In the early 1940s, Cybernetics and its different areas (Computational Sciences, Neurophysiology, Systems Theory, Game Theory, Control Theory, Biomathematics) converged on the study of control and communication in both living beings and machines.
It was at the end of the 1980s that the area we call Artificial Intelligence1 was segmented into four major sub-areas:
1. Symbolic Processing.
3. Artificial Life.
4. Reactive Robotics.
Analysis = Representation + Processing
“Learning denotes adaptive changes in the sense that they allow a system to perform the same task, or set of tasks, more efficiently the next time”
(Herbert Simon, 1983)
Quidgest systems have always been synonymous with flexibility and agility for organizations, and they are the result of the massive use of machines in software production. To safeguard people’s productivity and development, we should let machines do whatever they can. In this automation process, Genio takes advantage of standards implemented throughout Quidgest’s thirty years of existence. This process requires a commitment to standardization, always keeping in sight the development of efficient management systems, that is, the concept known as Lean.

It is in this spirit that Quidgest looks at Machine Learning as both the present and the future. After the focus on data collection (Big Data), the focus now is on learning from that data. This concept is far from new … What has changed in these years? We have more data and more computing power, and it is this evolution that allows us, today, to talk about Artificial Intelligence in the present and not in the future.
For example, before deciding to implement a neural network, we must consider how the data will be scaled, the purpose of the network, and what its ideal structure is2.
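As a concrete illustration of the scaling step mentioned above, here is a minimal sketch (toy data, illustrative names only) of min-max scaling, which maps every input feature into [0, 1] before it reaches a neural network:

```python
# Hypothetical example: min-max scaling of input features so that
# every column lies in [0, 1] before training a neural network.
def min_max_scale(rows):
    """Scale each column of `rows` (a list of equal-length lists) to [0, 1]."""
    cols = list(zip(*rows))
    lows = [min(c) for c in cols]
    # A constant column would give a zero span; fall back to 1.0 to avoid
    # division by zero.
    spans = [(max(c) - lo) or 1.0 for c, lo in zip(cols, lows)]
    return [[(v - lo) / s for v, lo, s in zip(row, lows, spans)]
            for row in rows]

data = [[2.0, 100.0], [4.0, 300.0], [6.0, 500.0]]
scaled = min_max_scale(data)
# Each column of `scaled` now runs from 0.0 to 1.0.
```

Without this step, features measured on very different scales (here, units versus hundreds) would dominate the network’s weight updates unevenly.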
Minsky, one of the great thinkers of Machine Learning, said: “Learning is about making useful changes in the way our mind works,” and Michalski complemented this idea: “Learning consists of building and modifying the representations of what we have experienced.”
The concept of Machine Learning can be defined as a data-analysis technique that teaches computers something that is innate to humans and animals: learning from experience. In the same way that we learn to speak a language, it is also possible to “teach” a computer to be a language expert. Typically, Machine Learning has two major areas:
- supervised learning
- unsupervised learning.
In the first, we fit models to concepts for which we know the expected inputs and outputs (for example, a word in Portuguese and its English translation). In the second, we know only the inputs, and we want to discover how they relate to each other.
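The contrast can be sketched in a few lines of toy code. The word pairs echo the translation example above; the clustering half is a deliberately simplified stand-in for an algorithm such as k-means, and all names are illustrative:

```python
# Supervised learning: we see input/output pairs, like a Portuguese word
# and its English translation, and learn the mapping (here, trivially,
# by memorising it).
training_pairs = {"casa": "house", "livro": "book", "gato": "cat"}
predict = training_pairs.get  # a toy "model": look up the learned pairs

# Unsupervised learning: we see only inputs and look for structure.
# Here, 1-D points are split into two groups by a midpoint threshold,
# a toy stand-in for clustering with k=2.
points = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
threshold = (min(points) + max(points)) / 2
clusters = [0 if p < threshold else 1 for p in points]
# The first three points fall in one cluster, the last three in the other.
```

The essential difference is visible in the data each half consumes: pairs with known answers versus raw inputs alone.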
It is in supervised learning that the best predictive results are obtained, since it is possible to establish a base of expected results and to evaluate the algorithm’s performance against held-out validation sets. In the short term, we will see systems and projects in areas such as healthcare, banking and human resources apply the knowledge of their consultants to develop mechanisms that not only store data and carry out processes, but also create added value and optimize operations.
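The validation-set idea mentioned above can be shown in miniature. This is a hedged sketch on toy labels: the “model” is just a majority-class baseline, but any real classifier would plug into the same holdout evaluation:

```python
# Evaluate a classifier against a held-out validation set.
# The model here is a majority-class baseline (illustrative only).
def majority_class(labels):
    """Return the most frequent label in the training data."""
    return max(set(labels), key=labels.count)

labels = ["approve", "approve", "reject", "approve", "reject", "approve"]
train, validation = labels[:4], labels[2:]   # simple holdout split
train, validation = labels[:4], labels[4:]

model = majority_class(train)                # the baseline's single prediction
accuracy = sum(1 for y in validation if y == model) / len(validation)
# `accuracy` measures how often the prediction matches the held-out labels.
```

Because the validation labels were never seen during “training”, the accuracy figure is an honest estimate of how the model would behave on new data.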
Semantic networks, for example, make it possible to group and classify concepts, helping to build a Knowledge Base3,4,5.
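A minimal sketch of what such a network looks like in code: concepts linked by “is-a” edges, forming a tiny knowledge base that supports classification queries. The concept names are illustrative assumptions, not taken from any Quidgest system:

```python
# A toy semantic network: each concept points to a broader concept via
# an "is-a" edge, so classification becomes a walk up the hierarchy.
is_a = {
    "neural network": "machine learning",
    "k-means": "machine learning",
    "machine learning": "artificial intelligence",
}

def ancestors(concept):
    """Collect every broader concept reachable by following is-a edges."""
    chain = []
    while concept in is_a:
        concept = is_a[concept]
        chain.append(concept)
    return chain

# ancestors("neural network") walks up through "machine learning"
# to "artificial intelligence".
```

Grouping falls out of the same structure: two concepts belong together when their ancestor chains share a node, which is the intuition behind the semantic-similarity measures cited above.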
Take, for example, one-stop or business process management solutions, or the user-interaction patterns that exist in any other application developed by Quidgest. They correspond to well-defined tasks and flows that emerged from empirical definitions of those patterns. Do these flows work optimally? Machine Learning can help redesign these processes based on evidence. One area where Quidgest’s Machine Learning R&D is strongly focused is RPA (Robotic Process Automation).
Another area Machine Learning can aid is the fight against fraud: by cross-referencing services or transactions, it can surface logical incompatibilities that support decision-making. In human resources, we can also build smarter evaluation systems that measure the direct impact of an employee on an entity’s financial results, and in healthcare we may soon determine risk factors and simulate how certain behaviors influence the population’s health in the short, medium and long term.
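The cross-referencing idea can be made concrete with a small, hypothetical rule: two transactions on the same account, in different cities, too close in time to both be genuine, constitute a logical incompatibility. The account names, cities and threshold below are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Toy transaction records: (account, city, timestamp).
transactions = [
    ("acct1", "Lisboa", datetime(2018, 5, 1, 10, 0)),
    ("acct1", "Porto",  datetime(2018, 5, 1, 10, 5)),  # 5 minutes later, ~300 km away
    ("acct2", "Lisboa", datetime(2018, 5, 1, 12, 0)),
]

def incompatible(a, b, min_gap=timedelta(hours=1)):
    """Same account, different city, less than `min_gap` apart in time."""
    return a[0] == b[0] and a[1] != b[1] and abs(a[2] - b[2]) < min_gap

# Cross-reference every pair of transactions and flag the incompatible ones.
flags = [(a, b) for i, a in enumerate(transactions)
         for b in transactions[i + 1:] if incompatible(a, b)]
# Only the Lisboa/Porto pair on acct1 is flagged.
```

A production system would use geographic distance and travel-time models rather than a fixed one-hour threshold, but the pattern of pairwise cross-checking is the same one described above.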
1 – Coelho, H. (1995). Inteligência Artificial em 25 Lições. Lisboa: Fundação Calouste Gulbenkian.
2 – Karim, M. N., Yoshida, T., et al. (1997). “Global and local neural network models in biotechnology: Application to different cultivation processes.” Journal of Fermentation and Bioengineering 83(1): 1-11.
3 – Alonso-Calvo, R., Maojo, V., et al. (2007). “An agent- and ontology-based system for integrating public gene, protein, and disease databases.” J Biomed Inform 40(1): 17-29.
4 – Deus, H. F., Stanislaus, R., et al. (2008). “A Semantic Web management model for integrative biomedical informatics.” PLoS One 3(8): e2946.
5 – Pesquita, C., Faria, D., et al. (2009). “Semantic similarity in biomedical ontologies.” PLoS Comput Biol 5(7): e1000443.