Not one, but many Artificial Intelligences*
Under the generic designation of artificial intelligence (AI), a diverse array of technologies and practices is grouped together, each with a markedly different impact on our lives.
In reality, there are many AIs, and they often do not get along with each other. Tracing their evolution over the past 70 years, a prudent way to avoid falling into the traps of passing fashions, makes it evident that different AIs have even been hostile to one another: each wave refers derogatorily to its predecessor and adopts a substantially different perspective.
In the 1950s, the goal was to mimic human intelligence. What later came to seem naive then encompassed everything within the realm of imagination and expectation: the era of GOFAI (Good Old-Fashioned AI).
In the 1980s, the objective shifted to encapsulating human knowledge in expert systems. These systems made decisions based on predefined rules and used inference engines to arrive, through logic, at transparent and explainable conclusions.
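How different this was from today's statistical approaches is easiest to see in code. Below is a minimal sketch of a forward-chaining inference engine of the kind those systems used; the rules and facts are invented for illustration and do not come from any historical system:

```python
# Minimal forward-chaining inference engine, a sketch of how 1980s
# expert systems derived explainable conclusions from explicit rules.
# The rules and facts are hypothetical illustrations, not a real system.

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

facts = {"fever", "cough", "short_of_breath"}
explanation = []  # the trace that makes the conclusion explainable

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            explanation.append(f"{sorted(conditions)} -> {conclusion}")
            changed = True

print(facts)        # now includes 'refer_to_doctor'
print(explanation)  # the chain of fired rules justifies the answer
```

Because every conclusion carries the chain of rules that produced it, such a system's answers are transparent and explainable by construction.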
In the 2010s, the focus turned to learning (Machine Learning and Deep Learning). If the machine didn’t learn, it wasn’t considered AI. Training was possible, but teaching rules to AI, as had been done in expert systems in the previous wave, was frowned upon.
Presently, AI has reached higher levels of conversational capability and has at last become wiser about leveraging AI’s previous discoveries. With ChatGPT, AI has, for the first time, passed the Turing Test formulated in 1950. It embraces the harmonious fusion of rule-based and deep learning approaches. Conversational AI benefits from advancements in both machine learning (for language comprehension) and knowledge-based AI (for reasoning and explanation), enabling interactions with humans that have surprised and enthused everyone.
In just a few months, the conversational (or generative) AI provided by GPT has outperformed traditional Machine Learning for several compelling reasons: encyclopedic knowledge, the ability to generate content rather than providing limited responses to a narrow set of options (yes or no, cat or dog, a number from 0 to 100), structured response models, and, notably, humility and empathy.
The humility displayed by GPT is a highly relevant attribute that should be an essential element in the future regulation of AI. As for empathy, as evaluated by medical professionals, GPT has been deemed more empathetic in communicating with patients than the doctors themselves. Empathy, until just six months ago, was considered a uniquely human trait, one on which machines could not compete.
It is with deliberate intention that GPT is referenced here instead of LLMs (Large Language Models) because, perhaps unable to balance as effectively on the shoulders of the giants they stand upon, no other LLM currently matches the level of intelligence exhibited by GPT.
AI is cursed by bullying and excessive optimism
Discussing what constitutes AI and what does not is among the most barren and pointless debates in the discipline. The concept of AI is constantly redefined and reshaped with each wave, influenced by the dominant technology of the time. Guardians of the concept’s purity shift the requirements to align with their interests. Factors such as traceability, determinism, randomness, learning, explainability, supervision, natural language, and graphic representation come and go in arguments.
AI is variously required to be deterministic or probabilistic, more reliant on brute force (data and computing power) or more “intelligent”, more or less dependent on human feedback and supervision, offering greater or lesser explainability, and having more or less self-awareness (understood as the capacity to self-evaluate its outputs). If it’s narrow AI, it’s argued it should be general. If it’s general, it’s argued it should be superintelligent. Everything becomes a pretext for intimidation and exclusion.
Generally, if a capability has been achieved, it’s no longer considered AI. If it’s not in my research area, it’s also not AI. Without being naive, it should also be recognized that dominant AI techniques are promoted by their major investors, who never denigrate them. It’s very convenient for cloud service providers that current AI, like GPT and other LLMs, requires massive, centralized computing and storage volumes.
The room for bullying, speculation, and excessive optimism in each AI spring is vast because, with one honorable exception, all the concepts under AI research are overly broad and subjective. Indeed, there are no universally accepted definitions of “intelligence”, “learning”, or “conversational ability”. The exception is logic, knowledge, and the scientific process of knowledge accumulation, which were the focus of expert-systems research during the second wave.
To engage seriously in this debate, enduring criteria that transcend trends, such as the Turing Test, must be employed. ChatGPT 3.5 passed this test for the first time at the end of 2022, a feat that deserves acknowledgment.
Generative AI began 30 years ago
Contrary to recent narratives, which are often skewed towards the latest developments and might trace the origins of Generative AI (GenAI) to the founding of OpenAI in 2015, to the “Attention Is All You Need” paper in 2017, or even to the emergence of ChatGPT in late 2022, GenAI did not start recently. Although different in character from what is currently regarded as generative AI, the expert systems of the 1980s already possessed a significant capability to generate lengthy content. This came, however, at the cost of making knowledge explicit, which generally proved prohibitively expensive.
Yet there is an economically significant exception: software development. Those who continued to invest in expert-systems research for generating complex information systems hold a sustainable advantage not yet surpassed by current generative AI. Expert systems generate far more code than GPT, and far faster: the efficiency gains are on the order of 1 to 100,000.
Furthermore, the ability to generate code with expert systems can be integrated downstream in the development process, combined with GPT’s upstream universal knowledge and conversational exploration of options.
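One hypothetical way to picture that division of labor is sketched below. Both function names and bodies are illustrative placeholders invented for this example; neither wraps the API of GPT, Genio, or any other real product.

```python
# Hypothetical sketch of the upstream/downstream combination described
# above: a conversational model explores options with people upstream,
# and an expert system generates the code deterministically downstream.
# All names and bodies are illustrative placeholders.

def explore_requirements_upstream(stakeholder_notes: str) -> dict:
    """Upstream: turn an open-ended conversation into a structured spec."""
    # A real pipeline would call a conversational LLM here; we return
    # a fixed structured specification purely for illustration.
    return {"entity": "Invoice", "fields": ["number", "date", "total"]}

def generate_code_downstream(spec: dict) -> str:
    """Downstream: apply explicit generation rules to emit complete code."""
    lines = [f"class {spec['entity']}:"]
    lines += [f"    {name}: str = ''" for name in spec["fields"]]
    return "\n".join(lines)

spec = explore_requirements_upstream("We need to manage customer invoices...")
print(generate_code_downstream(spec))
```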
This form of generative AI is already eight times more productive than the previous forefront of software engineering, low-code platforms. And it could become twenty times more productive through this symbiosis with GPT.
Limiting AI progress is absurd
In light of recent advancements in AI, much discussion has focused on its regulation and the limitations of its progress. However, the notion of prohibiting scientific advances is both futile and absurd. For every nostalgic individual, dozens of researchers assert, “We want to make AI happen now, not in a hundred years, and by our own hands.”
This sentiment is more than understandable. Human life is finite, and accelerating the future is our opportunity to experience it.
A rational alternative to the prohibition or conditioning of AI advancement is to prohibit (perhaps ‘prevent’ isn’t strong enough to convey the necessary determination) the general public’s ignorance about AI. This approach is akin to ensuring that the population is knowledgeable about essential topics like writing, arithmetic, recycling, climate change, workers’ rights, European integration, the functioning of democracy, or traffic laws.
A society well-informed about artificial intelligence will be better equipped to handle the potential risks associated with AI.
AI risks are not just potential, they are already with us
There are several examples that can be highlighted to raise awareness of the consequences of artificial intelligence that are already evident in our lives:
- The Boeing 737 MAX 8 disasters
- The algorithms of social networks
- The ease of automatic responses
- The “whims” attributed to systems whose inner workings are not understood.
The Boeing 737 MAX 8 disasters
When two Boeing 737 MAX 8 planes crashed at the end of 2018 and the beginning of 2019, killing hundreds, few perceived these tragedies as a threat stemming from artificial intelligence. Yet both planes crashed for the same reason, a flight-control algorithm (MCAS) acting on faulty sensor data, and all the warning signs we should be watching for regarding the dangers of AI were present in these catastrophes, as the sketch following this list also suggests:
- The algorithm made decisions based on incorrect inputs, which it did not autonomously verify.
- The algorithm did not communicate or interact with the pilots or any control tower, directly assuming aircraft control.
- The algorithm did not feel the need to justify its decisions and actions.
- The algorithm did not incorporate an adequate knowledge model, knowledge that would have allowed it, for example, to consider the distance from the ground as a relevant factor in the decision to lower the nose, naturally harmless at 30,000 feet altitude but not at 1,000 feet.
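As a didactic illustration only, with invented thresholds and none of the complexity of real avionics, the missing safeguards might look something like this in code:

```python
# Didactic sketch of the safeguards the list above says were missing:
# cross-checking redundant inputs, refusing silent authority, and
# consulting a simple knowledge model (altitude) before acting.
# Purely illustrative; no real flight-control logic is reproduced here.

def pitch_down_command(aoa_sensor_a: float, aoa_sensor_b: float,
                       altitude_ft: float) -> str:
    STALL_AOA = 15.0          # hypothetical stall threshold, degrees
    DISAGREE_LIMIT = 5.0      # hypothetical sensor-disagreement limit
    MIN_SAFE_ALTITUDE = 3000  # knowledge model: pitching down near the
                              # ground is dangerous, at cruise it is not

    # 1. Verify inputs instead of trusting a single sensor.
    if abs(aoa_sensor_a - aoa_sensor_b) > DISAGREE_LIMIT:
        return "ALERT PILOTS: sensors disagree, automation disengaged"

    aoa = (aoa_sensor_a + aoa_sensor_b) / 2
    if aoa <= STALL_AOA:
        return "no action"

    # 2. Consult the knowledge model before assuming control.
    if altitude_ft < MIN_SAFE_ALTITUDE:
        return "ALERT PILOTS: stall risk, too low for automatic pitch-down"

    # 3. Act, but justify the decision so it can be audited.
    return f"pitch down (aoa={aoa:.1f} deg at {altitude_ft:.0f} ft)"

print(pitch_down_command(22.0, 8.0, 1000))    # disagreeing sensors
print(pitch_down_command(20.0, 21.0, 35000))  # safe, justified action
```

Cross-checked inputs, an explicit escalation to the pilots, a simple knowledge model, and a justified output: these correspond precisely to the four absences listed above.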
Everything wrong that we can expect from AI in the future was already present in these incidents (in truth, we cannot even call them accidents: the algorithm executed exactly as it was built). And yet, the field learned little from these fatal disasters despite all the media attention.
It will learn if AI regulation comes to require, always, the existence of a knowledge model, whether explicit or implicit, built through a process similar to the reinforcement learning from human feedback that enabled the success of ChatGPT.
Social media algorithms
We have been living with the enduring perverse effects of social media algorithms (such as those of Facebook, Twitter, or Telegram), search engines, news media, and streaming services (like YouTube or Netflix). We appreciate it when these platforms present messages, ideas, and content that resonate with us or that we enjoy hearing.
Gradually, we and our friends come to hear only what we like and to ignore everything else completely. Meanwhile, on the other side, another group of people accesses only what it likes, and the algorithm leads each side to reject everything else.
What is the expected effect of continually receiving ideas that closely match one’s own, being invited to follow people who think exactly like oneself, or being suggested only the movies, series, or music one already likes? And what of being offered mechanisms (likes and comments) that reinforce one’s identification with one’s group and even encourage more aggressive expression?
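The narrowing mechanism can be caricatured in a few lines of code. This is a toy model with invented numbers, not any platform’s actual algorithm; it only shows how ranking by similarity shrinks the slice of the opinion spectrum a user sees:

```python
# Toy model of the effect described above. A "feed" that ranks items by
# similarity to the user's opinion exposes a far narrower slice of the
# opinion spectrum than a neutral feed. All numbers are invented.

import random
import statistics

random.seed(42)
catalog = [random.uniform(-1.0, 1.0) for _ in range(1000)]  # item "opinions"
user_opinion = 0.3

neutral_feed = random.sample(catalog, 20)
filtered_feed = sorted(catalog, key=lambda x: abs(x - user_opinion))[:20]

print(f"opinion spread, neutral feed:  {statistics.pstdev(neutral_feed):.3f}")
print(f"opinion spread, filtered feed: {statistics.pstdev(filtered_feed):.3f}")
```

The filtered feed’s spread comes out at a small fraction of the neutral feed’s: the algorithm need not be malicious to narrow its user’s world.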
We have been subordinated to these algorithms for decades, and their effect on societies is evident. Proximity, rationality, and objectivity have given way to misinformation and extreme positions, as can be seen in elections in the USA, Brazil, and Turkey, in the Brexit referendum, in the invasion of Ukraine, and even in sports club rivalries. Often these are entirely thoughtless behaviors, but they are a direct result of the “intelligence” of AI.
The ease of automatic responses
Algorithms are conditioning organizations towards a culture of ease. Suppose we receive a message on a professional network or a business chat channel from someone saying they could not fulfill their commitments or tasks. In that case, the suggested automatic responses might well be “Ok”, “No problem”, or “I understand”. What kind of organizations will we have in the future when accountability becomes just a distant memory?
Demystifying Artificial Intelligence
A lack of understanding of the techniques underpinning ChatGPT fosters either blind adoption or blind rejection, and both are dangerous for its use.
When consulting scientific articles about ChatGPT (on platforms like arXiv), it is common to read that ChatGPT “invents”, “hallucinates”, “is terrified of not giving answers”, “loves detailing concepts”, or “refuses to answer what it thinks it doesn’t know”.
Rather than grappling with the entire process, including tokenization, pretraining, the base model, supervised fine-tuning, reward modeling, and reinforcement learning from human feedback, we endow ChatGPT with a will of its own and treat it as an independent entity with personality and whims.
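For readers who prefer the deflationary view, that pipeline can be written as a schematic outline. Every function body below is a placeholder; the point is the map of stages, not a training implementation:

```python
# Schematic outline of the pipeline named above, written as plain
# functions to strip away the mystique. Each body is a placeholder;
# this is a didactic map of the stages, not real training code.

def tokenize(corpus):            # text -> sequences of integer tokens
    return [[ord(c) for c in doc] for doc in corpus]

def pretrain(tokens):            # base model: learn to predict next token
    return {"stage": "base model", "trained_on": len(tokens)}

def supervised_fine_tune(model, demonstrations):  # imitate curated answers
    return {**model, "stage": "sft"}

def train_reward_model(model, human_rankings):    # score answers by preference
    return {**model, "stage": "reward model"}

def rlhf(model, reward_model):   # optimize the model against that score
    return {**model, "stage": "assistant"}

base = pretrain(tokenize(["some very large corpus ..."]))
sft = supervised_fine_tune(base, demonstrations=[])
rm = train_reward_model(sft, human_rankings=[])
assistant = rlhf(sft, rm)
print(assistant["stage"])  # 'assistant': no will, no whims, just stages
```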
Keeping generative AI, such as GPT or Genio, in the scientific and non-emotional domain requires its demystification. Is it too late for that?
The adoption of Generative Artificial Intelligence determines competitiveness
The classification proposed by Everett M. Rogers in “Diffusion of Innovations” describes the innovative capacity of individuals, companies, and countries.
Presently, the adoption of generative AI with GPT determines the competitiveness of professionals in organizations:
- Innovators: They were already following GPT even before ChatGPT 3.5. Now, they are exploring the API, Plugins, and other LLMs.
- Early Adopters: They tested ChatGPT in its very first week. Now, they don’t do anything (texts, meetings, code) without GPT-4.
- Early Majority: They started using GPT for automatic text writing or drafting minutes. They don’t always manage to pose good questions.
- Late Majority: They use it out of curiosity but not for professional purposes. They spend more time finding faults in GPT than leveraging it.
- Laggards: They are still debating whether GPT is positive or negative, or even whether it is legal. When in doubt, they reject it on ethical grounds.
Companies and organizations should communicate more explicitly to their employees that their stance towards GPT now defines their profession. Being competent at posing questions and cautious in evaluating the responses is the only valid professional stance.
Workers and leaders who think their job is to prove that GPT doesn’t help them should be alerted that their job is at risk.
And let’s remember the decree of Count Lippe, which stipulated that the sergeant majors of the Portuguese army should know how to read and write because the officers, being nobles, were exempt from this requirement. Organizations are not successful if the hierarchy regarding knowledge mastery is inverted.
A quick assessment of your organization
According to Geoffrey A. Moore in “Crossing the Chasm”, technology companies face a huge challenge in transitioning their products from early adopters to the mainstream market, and this transition determines their success. ChatGPT, however, didn’t even pause at the chasm; it crossed it at dizzying speed. Suddenly, everyone, absolutely everyone, was talking about AI and ChatGPT.
So, in this case, it’s not the technology that’s being evaluated.
But we can use the “Crossing the Chasm” model to quickly gauge an organization’s success in the face of generative AI technological disruption by looking at the levels of its internal diffusion.
Organizations don’t follow the distribution of the general population. They don’t necessarily have 2.5% innovators and 16% laggards. Some have a majority of innovators and early adopters, allowing them to create and maintain competitive advantages. Others are slow to adopt innovation and are already declining, even if they don’t realize it.
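A back-of-the-envelope version of this assessment can take Rogers’ canonical population shares as a baseline. In the sketch below, the my_org figures are invented placeholders to be replaced with the results of a real internal survey:

```python
# Quick self-assessment sketch: compare an organization's adoption
# profile against Rogers' canonical population shares. The my_org
# figures are invented placeholders for a real internal survey.

rogers_baseline = {
    "innovators": 2.5, "early adopters": 13.5,
    "early majority": 34.0, "late majority": 34.0, "laggards": 16.0,
}
my_org = {  # hypothetical head-count percentages
    "innovators": 1.0, "early adopters": 9.0,
    "early majority": 30.0, "late majority": 40.0, "laggards": 20.0,
}

for category, baseline in rogers_baseline.items():
    delta = my_org[category] - baseline
    print(f"{category:15} org {my_org[category]:5.1f}% "
          f"vs population {baseline:5.1f}% (delta {delta:+5.1f})")

# A deficit in the early categories and a surplus in the late ones is
# the competitive-deficit signal described in the text.
```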
In the face of such a radical change as generative AI, the competitive deficit becomes very visible very quickly. Those who don’t use ChatGPT to generate content, or a system like Genio to generate code, will soon make their obsolescence evident.
Complementing GPT with solid knowledge
Creating base models (Large Language Models, or LLMs) is within the reach of only a few organizations worldwide. However, intelligence and knowledge are distributed far more widely than the ability to create LLMs.
For example, in software engineering, the gaps that a GPT has not yet filled are the areas where expert systems for code generation have built a solid knowledge foundation:
- They know how to ask the right questions (they are excellent prompt engineers).
- They produce hundreds of thousands of lines of code, not just a limited number of tokens.
- They don’t hallucinate; they create 100% correct code.
- They generate the entire solution in a single iteration.
The transformative power of Generative Artificial Intelligence
Incorporating Generative AI into the education of young students, especially in the field of Software Engineering, provides them with valuable exposure to modern technologies and methods of development, effectively preparing them to enter the job market and become successful professionals.
We need the Ministries of Education and schools to dream of making ours the first country in the world where software development is as widespread as writing. Think about the real difference between leaving school as a user of YouTube, Facebook, or Office and leaving as a professional of digital transformation.
And yes, the pace of curriculum revisions needs to be adapted to the dynamic evolution of AI.
Artificial Intelligence will not create unemployment
AI will not create unemployment but opportunities. We can foresee the acceleration of the ongoing digital transformation in all knowledge domains. We can be certain of the progressive replacement of current professionals, not by generative AI itself, but by people who master it. We will witness the automation of more and more creative tasks.
And the democratization of knowledge is expected, especially with the possibility of retraining many more individuals as professional software creators.
In the long run, the economy will regulate itself, but in the short term and on a personal scale challenges are inevitable. The ability to minimize the negative impacts of this transition is what distinguishes the quality of a country’s leaders.
What AI can do, it will do better: almost immediately it performs much faster than we do, and it improves at a far faster pace than human progress. There is no balance between humanity and machines; there is complementarity. Throughout history, the tool has defined the human. We must let machines do everything they are capable of doing.
The most pessimistic, the Luddites of today, can rest assured. We certainly will not have widespread unemployment caused by generative AI. There will be a change in professions and tasks, but as long as we don’t know everything, there will always be work in the pursuit of knowledge, and as long as we are not eternal, there will always be work in the health sciences. The same applies to goals that are never completely attainable, such as social proximity, leisure, justice, or eliminating our ecological footprint.
Now, we’re going to take advantage of a technological revolution
Finally, we have the economic impact. In Portugal, the impact of this new wave of artificial intelligence is likely to be, unfortunately, the same as that of any previous technological revolution: we risk being mere consumers. And we may go into debt to pay for the artificial intelligence products and services created in other countries.
It has been this way for at least the past 150 years. The country’s relationship with technological development oscillates between adoption through foreign purchases and rejection. Adoption through imports leads to indebtedness and dependence. Rejection has an even more negative impact, leading to underdevelopment.
The alternative scenario depends on the question: What role do we want for our country in the international economic order reshaped by the new generation of AI? The desirable scenario would be growth based on knowledge, merit appreciation, rapid application of results, and deep integration into the value creation networks associated with generative AI.
This scenario would require opinion leaders in Portugal to stop thinking as consumers, employees of others, or mere newspaper readers and become committed actors discussing strategic visions for our shared future. We have another opportunity to do so!
*This article is part of the book “88 Vozes sobre a Inteligência Artificial”, which can be purchased at ISCTE Executive Education and www.leyaonline.com.