The definition of artificial intelligence cited in the preamble, given by John McCarthy in 1956 at the Dartmouth College conference, is not directly tied to an understanding of human intelligence. According to McCarthy, AI researchers are free to use techniques not observed in humans if that is what a specific problem requires.

At the same time, there is a point of view according to which intelligence can only be a biological phenomenon.

As T. A. Gavrilova, chairman of the St. Petersburg branch of the Russian Association of Artificial Intelligence, points out, in English the phrase artificial intelligence lacks the slightly fantastic anthropomorphic overtones it acquired in the rather unfortunate Russian translation: the word intelligence means “the ability to reason rationally,” not “intellect,” for which English has the separate word intellect.

Participants of the Russian Association of Artificial Intelligence give the following definitions of artificial intelligence:

One of the particular definitions of intelligence, common to a person and a “machine,” can be formulated as follows: “Intelligence is the ability of a system to create programs (primarily heuristic) during self-learning to solve problems of a certain class of complexity and solve these problems.”

Prerequisites for the development of artificial intelligence science

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By that time, many prerequisites for its emergence had already formed: philosophers had long debated the nature of man and the process of understanding the world, neurophysiologists and psychologists had developed a number of theories about the workings of the human brain and thinking, economists and mathematicians had asked questions about optimal computation and the representation of knowledge about the world in formalized form; finally, the foundation of the mathematical theory of computation (the theory of algorithms) had been laid, and the first computers had been created.

The capabilities of the new machines in terms of computing speed turned out to be greater than human ones, so the question arose in the scientific community: what are the limits of computer capabilities, and will machines reach the level of human development? In 1950, Alan Turing, one of the pioneers of computing, published the article “Computing Machinery and Intelligence” (known in Russian translation as “Can a Machine Think?”), which describes a procedure, later called the Turing test, by which it would be possible to determine the moment when a machine becomes the equal of a person in terms of intelligence.

History of the development of artificial intelligence in the USSR and Russia

In the USSR, work in the field of artificial intelligence began in the 1960s. A number of pioneering studies were carried out at Moscow State University and the Academy of Sciences under Veniamin Pushkin and D. A. Pospelov. Since the early 1960s, M. L. Tsetlin and his colleagues had been working on problems of learning in finite automata.

In 1964, the work of the Leningrad logician Sergei Maslov, “The Inverse Method for Establishing Derivability in Classical Predicate Calculus,” was published; it was the first to propose a method for automatically searching for proofs of theorems in the predicate calculus.

Until the 1970s, all AI research in the USSR was carried out within the framework of cybernetics. According to D. A. Pospelov, the disciplines of computer science and cybernetics were conflated at that time owing to a number of academic disputes. Only in the late 1970s did people in the USSR begin to speak of the scientific direction “artificial intelligence” as a branch of computer science. At the same time computer science itself was born, subordinating its ancestor, cybernetics. In the late 1970s an explanatory dictionary of artificial intelligence, a three-volume reference book on artificial intelligence, and an encyclopedic dictionary of computer science were created, in which the sections “Cybernetics” and “Artificial Intelligence” appear, along with other sections, within computer science. The term “computer science” became widespread in the 1980s, while “cybernetics” gradually disappeared from circulation, surviving only in the names of institutions founded during the “cybernetic boom” of the late 1950s and early 1960s. This view of artificial intelligence, cybernetics, and computer science is not universally shared, since in the West the boundaries between these sciences are drawn somewhat differently.

Approaches and directions

Approaches to understanding the problem

There is no single answer to the question of what artificial intelligence studies. Almost every author of a book about AI starts from some definition and considers the achievements of the science in its light.

  • top-down (Top-Down AI), semiotic: the creation of expert systems, knowledge bases and logical inference systems that simulate high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;
  • bottom-up (Bottom-Up AI), biological: the study of neural networks and evolutionary computations that model intelligent behavior based on biological elements, as well as the creation of corresponding computing systems, such as the neurocomputer or the biocomputer.

The latter approach, strictly speaking, does not belong to the science of AI in the sense given by John McCarthy; the two are united only by a common final goal.

The Turing Test and the Intuitive Approach

This approach focuses on the methods and algorithms that help an intelligent agent survive in its environment while performing its task. Here, path-finding and decision-making algorithms are studied much more thoroughly.
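The sense-decide-act cycle described here can be sketched in miniature. The one-dimensional world, the distance sensor, and the greedy policy below are illustrative assumptions, not a real agent architecture:

```python
# A reflex agent in a one-dimensional world: it senses the signed
# distance to the goal and steps toward it. World, sensor, and policy
# are invented for the example.
def run_agent(position, goal, max_steps=20):
    """Return (final_position, steps_taken)."""
    for step in range(max_steps):
        percept = goal - position          # sensor: signed distance to goal
        if percept == 0:
            return position, step          # task accomplished
        action = 1 if percept > 0 else -1  # actuator: move one cell
        position += action
    return position, max_steps

print(run_agent(position=2, goal=7))  # (7, 5)
```

Even this trivial loop exhibits the structure the agent-based view cares about: perception, a decision rule, and actions that change the environment.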

Hybrid approach

The hybrid approach assumes that only a synergistic combination of neural and symbolic models achieves the full range of cognitive and computational capabilities. For example, expert inference rules can be generated by neural networks, while generative rules are obtained through statistical learning. Proponents of this approach believe that hybrid information systems will be significantly stronger than the sum of the individual concepts taken separately.
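A toy illustration of this combination, under the assumption that a trained network can be "verbalized" simply by enumerating a small input space (the data, training loop, and rule format are all invented for the example):

```python
# Hybrid sketch: a subsymbolic model (a perceptron) is trained on data,
# then a symbolic rule is read back from it by enumerating the small
# input space. Data, training loop, and rule format are illustrative.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single threshold unit; samples = [((x1, x2), label), ...]."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Subsymbolic step: learn logical AND from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

# Symbolic step: verbalize the trained network as a rule.
true_inputs = [x for x, _ in data if w[0] * x[0] + w[1] * x[1] + b > 0]
print(true_inputs)  # [(1, 1)]: the network encodes "fire iff both inputs are 1"
```

Real neuro-symbolic systems extract rules from far larger networks, but the division of labor is the same: statistics learns the behavior, symbols make it inspectable.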

Research models and methods

Symbolic modeling of thought processes

Analyzing the history of AI, we can identify the broad area of modeling reasoning. For many years, the development of this science has moved precisely along this path, and it is now one of the most developed areas in modern AI. Modeling reasoning involves the creation of symbolic systems that take a formalized problem as input and are required to produce its solution as output. As a rule, the proposed problem has already been formalized, that is, translated into mathematical form, but either has no solution algorithm or the algorithm is too complex or time-consuming. This area includes theorem proving, decision making and game theory, planning and scheduling, and forecasting.
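The kind of symbolic system described here can be illustrated with a minimal forward-chaining prover over propositional Horn rules; the facts and rules are invented for the example:

```python
# A minimal sketch of symbolic reasoning: forward chaining over
# propositional Horn rules until no new atom can be derived.
def forward_chain(facts, rules):
    """Derive all provable atoms; rules = [(premises, conclusion), ...]."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Example: a syllogism in propositional form.
rules = [({"human"}, "mortal"), ({"greek", "philosopher"}, "human")]
derived = forward_chain({"greek", "philosopher"}, rules)
print(sorted(derived))  # ['greek', 'human', 'mortal', 'philosopher']
```

Serious theorem provers work with full predicate logic and sophisticated search strategies, but the input-output contract is the one described above: a formalized problem in, a derived conclusion out.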

Working with Natural Languages

An important direction is natural language processing, within which the capabilities of understanding, processing, and generating texts in “human” language are analyzed. The direction aims to process natural language well enough that a system could acquire knowledge independently by reading existing text available on the Internet. Some direct applications of natural language processing include information retrieval (including deep text mining) and machine translation.

Representation and use of knowledge

The direction of knowledge engineering combines the tasks of extracting knowledge from simple information, systematizing it, and using it. It is historically associated with the creation of expert systems, programs that use specialized knowledge bases to obtain reliable conclusions on a given problem.

Extracting knowledge from data is one of the basic problems of data mining. Various approaches to it exist, including approaches based on neural network technology that use procedures of neural network verbalization.

Machine learning

Machine learning concerns the independent acquisition of knowledge by an intelligent system in the course of its operation. This direction has been central since the very beginning of AI. In 1956, at the Dartmouth summer conference, Ray Solomonoff presented a report on a probabilistic unsupervised learning machine, which he called “An Inductive Inference Machine.”
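In the unsupervised spirit of Solomonoff's report, here is a minimal sketch of learning without labels: one-dimensional k-means clustering. The data points and the choice of two clusters are illustrative assumptions:

```python
# One-dimensional k-means: group unlabeled points around k centers.
# Data and initial centers are invented for the example.
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [0.8, 1.0, 1.2, 9.9, 10.0, 10.1]
print([round(c, 6) for c in sorted(kmeans_1d(data, centers=[0.0, 5.0]))])  # [1.0, 10.0]
```

No one tells the algorithm which group a point belongs to; the structure is discovered from the data itself, which is exactly what "unsupervised" means.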

Robotics

Machine creativity

The nature of human creativity is even less studied than the nature of intelligence. Nevertheless, this area exists, and the problems of computer writing music, literary works (often poetry or fairy tales), and artistic creation are posed here. Creating realistic images is widely used in the film and gaming industries.

The study of problems of technical creativity by artificial intelligence systems stands apart. The theory of inventive problem solving (TRIZ), proposed by G. S. Altshuller in 1946, marked the beginning of such research.

Adding this capability to any intelligent system makes it possible to demonstrate very clearly what exactly the system perceives and how it understands it. By adding noise in place of missing information, or by filtering noise with the knowledge available in the system, abstract knowledge is turned into concrete images that a person perceives easily; this is especially useful for intuitive and low-value knowledge, whose verification in formal form requires significant mental effort.

Other areas of research

Finally, there are many applications of artificial intelligence, each of which forms an almost independent field. Examples include programming intelligence in computer games, nonlinear control, intelligent information security systems.

In the future, it is assumed that the development of artificial intelligence will be closely connected with the development of a quantum computer, since some properties of artificial intelligence have similar operating principles to quantum computers.

It can be seen that many areas of research overlap. This is typical of any science. But in artificial intelligence, the relationship between seemingly different areas is especially strong, and this is associated with the philosophical debate about strong and weak AI.

Modern artificial intelligence

Two directions of AI development can be distinguished:

  • solving problems that bring specialized AI systems closer to human capabilities and integrating them, something already realized in human nature (see intelligence amplification);
  • creating an artificial mind that integrates already-built AI systems into a single system capable of solving the problems of humanity (see strong and weak artificial intelligence).

At the moment, however, the field of artificial intelligence draws in many subject areas whose relationship to AI is practical rather than fundamental. Many approaches have been tried, but no research group has yet come close to creating a general artificial intelligence. Below are just some of the best-known developments in the field of AI.

Application

Some of the most famous AI systems are:

Banks use artificial intelligence (AI) systems in insurance (actuarial mathematics), in trading on the stock exchange, and in property management. Pattern recognition methods (including both more complex specialized methods and neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters, and air defense systems (target identification), as well as in a number of other national security tasks.

Psychology and cognitive science

The methodology of cognitive modeling, proposed by Axelrod, is designed for analysis and decision making in ill-defined situations.

It is based on modeling experts’ subjective ideas about the situation and includes: a methodology for structuring the situation; a model for representing the expert’s knowledge as a signed digraph (cognitive map) (F, W), where F is the set of factors of the situation and W is the set of cause-and-effect relationships between them; and methods of situation analysis. Currently, the methodology of cognitive modeling is developing toward improving the apparatus for analyzing and modeling the situation: models for forecasting the development of the situation and methods for solving inverse problems have been proposed.
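The signed digraph (F, W) described above can be sketched directly. The factors, weights, and the simple linear propagation rule below are illustrative assumptions, not Axelrod's actual formalism:

```python
# A cognitive map as a signed digraph (F, W): factors and signed
# cause-effect weights. Factors, weights, and the linear update
# rule are invented for the example.
F = ["taxes", "production", "employment"]
W = {("taxes", "production"): -0.5,       # higher taxes depress production
     ("production", "employment"): +0.8}  # more production raises employment

def propagate(state, steps=1):
    """Push factor changes along the signed links for `steps` rounds."""
    for _ in range(steps):
        new_state = dict(state)
        for (src, dst), weight in W.items():
            new_state[dst] = new_state.get(dst, 0.0) + weight * state.get(src, 0.0)
        state = new_state
    return state

# Effect of a unit increase in "taxes" after two propagation steps:
result = propagate({"taxes": 1.0}, steps=2)
print(round(result["employment"], 2))  # -0.4: employment falls
```

The signs on the edges are what make such maps useful for ill-defined situations: even without precise numbers, the direction of an indirect effect can be read off the graph.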

Philosophy

The science of “creating artificial intelligence” could not help but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised.

Philosophical problems of creating artificial intelligence can be divided into two groups, relatively speaking, “before and after the development of AI.” The first group answers the question: “What is AI, is it possible to create it, and, if possible, how to do it?” The second group (ethics of artificial intelligence) asks the question: “What are the consequences of creating AI for humanity?”

The term “strong artificial intelligence” was introduced by John Searle, and the approach is characterized in his words:

Moreover, such a program would not simply be a model of the mind; it would, in the literal sense of the word, itself be a mind, in the same sense in which the human mind is a mind.

At the same time, it remains to be understood whether a “purely artificial” mind (“metamind”) is possible, one that understands and solves real problems while being devoid of the emotions that are characteristic of a person and necessary for individual survival.

In contrast, proponents of weak AI prefer to view programs only as tools that allow them to solve certain problems that do not require the full range of human cognitive abilities.

Ethics

Other traditional faiths rarely address the issues of AI, but some theologians nevertheless pay attention to them. For example, Archpriest Mikhail Zakharov, arguing from the point of view of the Christian worldview, poses the following question: “Man is a rationally free being, created by God in His image and likeness. We are accustomed to attributing all these definitions to the biological species Homo sapiens. But how justified is this?” He answers this question as follows:

If we assume that research in the field of artificial intelligence someday leads to the emergence of an artificial being that is superior in intelligence to humans and has free will, would this mean that this being is a person? ... Man is God’s creation. Can we call this creature a creation of God? At first glance, it is a creation of man. But even the creation of man should hardly be understood literally, as God sculpting the first man from clay with His own hands. This is probably an allegory indicating the materiality of the human body, created by the will of God. But without the will of God nothing happens in this world. Man, as a co-creator of this world, can, fulfilling the will of God, create new creatures. Such creatures, created by human hands according to God’s will, can probably be called creations of God. After all, man creates new species of animals and plants, and we consider plants and animals to be God’s creations. The same can be applied to an artificial being of a non-biological nature.

Science fiction

The topic of AI is considered from different angles in the works of Robert Heinlein: the hypothesis that AI self-awareness emerges when a structure grows complex beyond a certain critical level and interacts with the outside world and with other bearers of intelligence (“The Moon Is a Harsh Mistress”, “Time Enough for Love”, the characters Mycroft, Dora, and Aya in the Future History series), and the problems of AI development after a hypothetical self-awakening, together with related social and ethical issues (“Friday”). The socio-psychological problems of human interaction with AI are also considered in Philip K. Dick’s novel “Do Androids Dream of Electric Sheep?”, also known through its film adaptation, Blade Runner.

The works of the science fiction writer and philosopher Stanislaw Lem describe, and in many ways anticipate, the creation of virtual reality, artificial intelligence, nanorobots, and many other problems of the philosophy of artificial intelligence. Particularly worth noting is the futurology of Summa Technologiae. In addition, the adventures of Ijon Tichy repeatedly describe the relationship between living beings and machines: the rebellion of an on-board computer with subsequent unexpected events (the eleventh voyage), the adaptation of robots to human society (“The Washing Machine Tragedy” from “Memoirs of Ijon Tichy”), the creation of absolute order on a planet by processing its living residents (the twenty-fourth voyage), and the inventions of Corcoran and Diagoras and a psychiatric clinic for robots (both in “Memoirs of Ijon Tichy”). There is also a whole cycle of novels and short stories, The Cyberiad, in which almost all the characters are robots, distant descendants of robots that escaped from people (they call people palefaces and consider them mythical creatures).

Movies

Films about artificial intelligence have been made since roughly the 1960s, alongside science fiction stories and novellas. Many stories by world-renowned authors have been filmed and have become classics of the genre, while others have become milestones in its development.

Artificial intelligence (AI) is the science and development of intelligent machines and systems, especially intelligent computer programs, aimed at understanding human intelligence. The methods used, however, are not necessarily biologically plausible. The problem is that it is not known which computational procedures we want to call intelligent; and since we understand only some of the mechanisms of intelligence, within this science intelligence is taken to mean only the computational part of the ability to achieve goals in the world.

Various kinds and degrees of intelligence exist in many people, animals, and some machines, including intelligent information systems and various models of expert systems with different knowledge bases. At the same time, as we see, this definition of intelligence is not tied to an understanding of human intelligence: these are different things. Moreover, this science is not limited to modeling human intelligence, since, on the one hand, one can learn something about how to make machines solve problems by observing people, while on the other hand most work in AI studies the problems that humanity needs to solve in an industrial and technological sense. Therefore, AI researchers are free to use techniques not observed in humans if that is what a specific problem requires.

It is in this sense that the term was introduced by J. McCarthy in 1956 at the Dartmouth College conference, and it has retained its original meaning in the scientific community to this day, despite criticism from those who believe that intelligence can only be a biological phenomenon, and despite obvious contradictions from the point of view of human intelligence.

In philosophy, the question of the nature and status of the human intellect has not been resolved. Nor is there an exact criterion for when computers achieve “intelligence,” although at the dawn of artificial intelligence a number of hypotheses were proposed, for example the Turing test and the Newell-Simon hypothesis. Therefore, despite the many approaches both to understanding the problems of AI and to creating intelligent information systems, two main approaches to the development of AI can be distinguished:

· descending (Top-Down AI), semiotic: the creation of expert systems, knowledge bases and logical inference systems that simulate high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;

· ascending (Bottom-Up AI), biological: the study of neural networks and evolutionary computations that model intelligent behavior on the basis of smaller “non-intelligent” elements.

The latter approach, strictly speaking, does not belong to the science of artificial intelligence in the sense given by J. McCarthy; the two are united only by a common final goal.

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By that time, many prerequisites for its emergence had already formed: philosophers had long debated the nature of man and the process of understanding the world, neurophysiologists and psychologists had developed a number of theories about the workings of the human brain and thinking, economists and mathematicians had asked questions about optimal computation and the representation of knowledge about the world in formalized form; finally, the foundation of the mathematical theory of computation (the theory of algorithms) had been laid, and the first computers had been created.

The capabilities of the new machines in terms of computing speed turned out to be greater than human ones, so the scientific community raised the question: what are the limits of computer capabilities, and will machines reach the level of human development? In 1950, Alan Turing, one of the pioneers of computing, answered similar questions in the article “Computing Machinery and Intelligence” (“Can a Machine Think?”), describing a procedure, later called the Turing test, by which it would be possible to determine the moment when a machine becomes the equal of a person in terms of intelligence.

The Turing test is an empirical test proposed by Alan Turing in his 1950 paper “Computing Machinery and Intelligence,” published in the philosophy journal Mind. The purpose of the test is to determine whether artificial thinking close to human thinking is possible. The standard interpretation is as follows: “A person interacts with one computer and one person. Based on the answers to his questions, he must determine whom he is talking with: a person or a computer program. The purpose of the computer program is to mislead the person into making the wrong choice.” None of the test participants can see one another.

There are three approaches to defining artificial intelligence:

1) The logical approach to creating artificial intelligence systems is aimed at building expert systems with logical models of knowledge bases, using the language of predicates. In the 1980s the Prolog language and logic programming system was adopted as the educational model for artificial intelligence systems. Knowledge bases written in Prolog are sets of facts and rules of logical inference written in a logical language. The logical model of knowledge bases makes it possible to record not only specific information and data as facts, but also generalized information, by means of rules and procedures of logical inference, including logical rules for defining concepts that express certain knowledge as specific and generalized information. In general, within the logical approach to designing knowledge bases and expert systems, research on artificial intelligence problems in computer science is aimed at the creation, development, and operation of intelligent information systems, including teaching students and schoolchildren and training the users and developers of such systems.

2) The agent-based approach has been developing since the early 1990s. According to this approach, intelligence is the computational part (in essence, planning) of the ability to achieve the goals set for an intelligent machine. Such a machine is an intelligent agent that perceives the world around it with sensors and can influence objects in its environment with actuators. This approach focuses on the methods and algorithms that help the intelligent agent survive in its environment while performing its task. Here, search and decision-making algorithms are studied much more thoroughly.

3) The intuitive approach assumes that AI will be able to exhibit behavior indistinguishable from human behavior, even in ordinary situations. This idea generalizes the approach of the Turing test, which states that a machine will have become intelligent when it can carry on a conversation with an ordinary person and the person cannot tell that he is talking to a machine (the conversation is conducted in writing).
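The goal-directed inference underlying the logical approach (item 1) can be sketched as propositional backward chaining in the style of Prolog; the facts and rules are invented for the example:

```python
# Goal-directed (Prolog-style) backward chaining over propositional
# rules: to prove a goal, find a rule whose head matches it and
# recursively prove the rule's body. Facts and rules are illustrative.
facts = {"has_fur", "gives_milk"}
rules = [("mammal", ["has_fur"]),     # mammal :- has_fur.
         ("mammal", ["gives_milk"]),  # mammal :- gives_milk.
         ("animal", ["mammal"])]      # animal :- mammal.

def prove(goal, depth=10):
    """True if `goal` follows from the facts via the rules."""
    if depth == 0:
        return False          # cut off runaway recursion
    if goal in facts:
        return True
    return any(head == goal and all(prove(sub, depth - 1) for sub in body)
               for head, body in rules)

print(prove("animal"))  # True: animal <- mammal <- has_fur
print(prove("plant"))   # False: nothing supports it
```

Prolog itself works over full first-order terms with unification rather than bare propositions, but the control strategy (start from the goal and work backward through the rules) is the same.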

Within this definition, the following areas of research in the field of AI can be distinguished:

- Symbolic modeling of thought processes.

Analyzing the history of AI, we can single out the broad area of modeling reasoning. For many years the development of AI as a science has moved precisely along this path, and it is now one of the most developed areas in modern AI. Modeling reasoning involves the creation of symbolic systems that take a formalized problem as input and are required to produce its solution as output. As a rule, the proposed problem has already been formalized, that is, translated into mathematical form, but either has no solution algorithm or the algorithm is too complex or time-consuming. This area includes theorem proving, decision making and game theory, planning and scheduling, and forecasting.

- Working with natural languages.

An important area is natural language processing, which analyzes the capabilities of understanding, processing, and generating texts in “human” language. In particular, the problem of machine translation of texts from one language to another has not yet been fully solved. In the modern world, the development of information retrieval methods plays an important role. By its nature, the original Turing test is related to this direction.

- Accumulation and use of knowledge.

According to many scientists, an important property of intelligence is the ability to learn. Knowledge engineering therefore comes to the fore, combining the tasks of obtaining knowledge from simple information, systematizing it, and using it. Advances in this area affect almost every other area of AI research. Two important subareas should not be overlooked here. The first, machine learning, concerns the process of independent acquisition of knowledge by an intelligent system in the course of its operation. The second is associated with the creation of expert systems, programs that use specialized knowledge bases to obtain reliable conclusions on a given problem.

The field of machine learning includes a large class of pattern recognition problems, for example the recognition of characters, handwritten text, and speech, and text analysis. Many such problems are successfully solved using biological modeling.

- Biological modeling.

There are notable and interesting achievements in the modeling of biological systems. Strictly speaking, several independent directions can be included here. Neural networks are used to solve fuzzy and complex problems such as geometric shape recognition or object clustering. The genetic approach is based on the idea that an algorithm can become more efficient if it borrows better characteristics from other algorithms (“parents”). A relatively new approach, in which the task is to create an autonomous program (an agent) that interacts with an external environment, is called the agent-based approach. Computer vision, which is also associated with robotics, deserves special mention.
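The genetic approach mentioned above can be sketched with a minimal genetic algorithm that maximizes the number of 1-bits in a string; the problem and all parameters are illustrative assumptions:

```python
# A minimal genetic algorithm: offspring combine "parent" bit strings
# via crossover, mutation adds variation, and fitter variants survive.
# The toy problem (maximizing the count of 1-bits) is invented.
import random

def evolve(length=12, population=20, generations=40, seed=1):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(population)]
    fitness = sum  # fitness of a bit string = number of 1-bits
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: population // 2]            # selection (elitist)
        children = []
        while len(children) < population - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < 0.2:               # occasional mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), len(best))
```

Because the best individuals always survive into the next generation, fitness never decreases, which is exactly the "borrow better characteristics" intuition described above.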

- Robotics.

In general, robotics and artificial intelligence are often associated with each other. The integration of these two sciences, the creation of intelligent robots, can be considered another area of AI.

- Machine creativity.

The nature of human creativity is even less studied than the nature of intelligence. Nevertheless, this area exists, and problems of computer composition of music, of literary works (often poetry or fairy tales), and of artistic creation are posed here. The creation of realistic images is widely used in the film and gaming industries. Adding this capability to any intelligent system makes it possible to demonstrate very clearly what exactly the system perceives and how it understands it. By adding noise in place of missing information, or by filtering noise with the knowledge available in the system, abstract knowledge is turned into concrete images that a person perceives easily; this is especially useful for intuitive and low-value knowledge, whose verification in formal form requires significant mental effort.

- Other areas of research.

There are many applications of artificial intelligence, each of which forms an almost independent direction. Examples include programming intelligence in computer games, nonlinear control, and intelligent information security systems.

Approaches to creating intelligent systems. The symbolic approach makes it possible to operate with weakly formalized representations and their meanings. Efficiency and overall effectiveness depend on the ability to single out only the essential information. The breadth of the classes of problems effectively solved by the human mind demands extraordinary flexibility in methods of abstraction, a flexibility not available to an engineering approach in which the researcher initially chooses, by a deliberately flawed criterion, whichever single model of abstraction and rule-based construction of entities most quickly yields an effective solution to the problem closest to that researcher. The result is significant expenditure of resources on non-core tasks: on most tasks the system falls back from intelligence to brute force, and the very essence of intelligence disappears from the project.

It is especially difficult, without symbolic logic, when the task is to develop new rules, since rule components that are not full-fledged units of knowledge are not logical. Most studies stop at the impossibility of even identifying, with the symbolic systems chosen at earlier stages, the new difficulties that have arisen, let alone solving them, and especially of training the computer to solve them, or at least to recognize and escape such situations.

Historically, the symbolic approach was the first in the era of digital machines: after the creation of Lisp, the first language of symbolic computation, its author became confident that it was practically possible to begin implementing these means of intelligence, intelligence as such, without any reservations or conventions.

The creation of hybrid intelligent systems, in which several models are used at once, is widely practiced. Expert inference rules can be generated by neural networks, while generative rules are obtained through statistical learning.

Development of the theory of fuzzy sets. The development of the theory of fuzzy sets began with the article “Fuzzy Sets,” published in 1965 by the US professor Lotfi Zadeh, who first introduced the concept of a fuzzy set and proposed the idea and first concepts of a theory that makes it possible to describe real systems in fuzzy terms. The most important branch of the theory of fuzzy sets is fuzzy logic, which is used to control systems as well as in experiments on forming models of them.
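Zadeh's central idea, membership as a degree in [0, 1] rather than a yes/no answer, can be sketched as follows. The "hot" temperature set and its ramp-shaped membership function are illustrative assumptions:

```python
# Membership in a fuzzy set is a degree in [0, 1]. The set "hot",
# with a linear ramp between 20 and 40 degrees C, is invented for
# the example.
def mu_hot(temp_c):
    """Degree to which temp_c belongs to the fuzzy set 'hot'."""
    if temp_c <= 20:
        return 0.0
    if temp_c >= 40:
        return 1.0
    return (temp_c - 20) / 20.0

def mu_not(mu):
    """Classic fuzzy complement: NOT is 1 - mu."""
    return 1.0 - mu

for t in (10, 25, 30, 45):
    print(t, mu_hot(t), mu_not(mu_hot(t)))
```

With min for AND and max for OR, such membership functions become the building blocks of fuzzy controllers: rules like "if temperature is hot, open the valve wide" fire to the degree that their conditions hold.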

The 1960s were a period of rapid development of computers and digital technologies based on binary logic. At the time it was believed that this logic would suffice to solve many scientific and technical problems, so the emergence of fuzzy logic went almost unnoticed despite its conceptually revolutionary nature. Nevertheless, a number of representatives of the scientific community recognized the importance of fuzzy logic and developed it, along with practical implementations in various industrial applications. After some time, interest in it began to grow even among the scientific schools that united adherents of binary-logic technologies, because many practical problems had been discovered that could not be solved with traditional mathematical models and methods, despite significantly increased available computing speeds. A new methodology was required, and its characteristic features turned out to be present in fuzzy logic.

Like robotics, fuzzy logic was met with great interest not in its country of origin, the United States, but beyond its borders; as a consequence, the first industrial use of fuzzy logic - to control the boiler installations of power plants - is associated with Europe. All attempts to control a steam boiler by traditional, sometimes very intricate, methods had ended in failure: the nonlinear system proved too complex. Only the use of fuzzy logic made it possible to synthesize a controller that satisfied all the requirements. In 1976, fuzzy logic was used as the basis of an automatic control system for a rotary kiln in cement production. However, the first practical results of applying fuzzy logic, obtained in Europe and America, did not cause any significant rise in interest in it. As with robotics, the country that first began the widespread implementation of fuzzy logic, realizing its enormous potential, was Japan.

Among the applied fuzzy systems created in Japan, the most famous is the subway train control system developed by Hitachi in Sendai. The project was implemented with the participation of an experienced driver, whose knowledge and experience formed the basis for the developed control model. The system automatically reduced the speed of the train as it approached the station, ensuring a stop at the required location. Another advantage of the train was its high comfort, due to the smooth acceleration and deceleration. There were a number of other advantages compared to traditional control systems.
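The control principle can be conveyed by a toy two-rule controller (all membership functions, numbers and rules below are invented for illustration; the actual Sendai system was far more elaborate):

```python
def mu_near(d):
    """Membership in the fuzzy set 'near the station' (d = distance in meters)."""
    return max(0.0, min(1.0, (200.0 - d) / 200.0))

def mu_far(d):
    """Membership in 'far from the station' as the complement of 'near'."""
    return 1.0 - mu_near(d)

def braking(d):
    """Weighted average of two rules:
       IF near THEN brake hard (1.0); IF far THEN brake lightly (0.1)."""
    w_near, w_far = mu_near(d), mu_far(d)
    return (w_near * 1.0 + w_far * 0.1) / (w_near + w_far)

print(braking(0.0))    # at the platform: full braking
print(braking(100.0))  # halfway: moderate braking
print(braking(400.0))  # far away: barely braking
```

Because the rule weights change continuously with distance, the braking command changes smoothly too, which is exactly the ride-comfort effect described above.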

The rapid development of fuzzy logic in Japan has led to its practical applications not only in industry, but also in the production of consumer goods. An example here is a video camera equipped with a fuzzy image stabilization subsystem, which was used to compensate for image fluctuations caused by the operator’s inexperience. This problem was too complex to be solved by traditional methods, since it was necessary to distinguish random fluctuations in the image from the purposeful movement of objects being photographed (for example, the movement of people).

Another example is an automatic washing machine operated at the touch of a single button (Zimmerman 1994). This one-button "wholeness" aroused interest and was met with approval. Fuzzy-logic methods made it possible to optimize the washing process by automatically recognizing the type, volume and degree of soiling of the laundry, not to mention that reducing the machine's controls to one single button made it significantly easier to handle.

Fuzzy logic inventions have been implemented by Japanese firms in many other devices, including microwave ovens (Sanyo), anti-lock braking systems and automatic transmissions (Nissan), integrated vehicle dynamics control (INVEC), and computer hard-drive controllers that reduce information access time.

In addition to the applications mentioned above, since the early 1990s there has been intensive development of fuzzy methods in a number of applied areas, including ones not related to technology:

  • electronic pacemaker control systems;
  • motor vehicle control systems;
  • cooling systems;
  • air conditioners and ventilation equipment;
  • waste incineration equipment;
  • glass melting furnaces;
  • blood pressure monitoring systems;
  • diagnosis of tumors;
  • diagnosis of the current state of the cardiovascular system;
  • control systems for cranes and bridges;
  • image processing;
  • fast chargers;
  • word recognition;
  • bioprocessor management;
  • electric motor control;
  • welding equipment and welding processes;
  • traffic control systems;
  • biomedical research;
  • water treatment plants.

At present, in the creation of artificial intelligence (in the original sense of the word; expert systems and chess programs do not belong here), there is an intensive grinding down into knowledge bases of all subject areas that bear at least some relation to AI. Almost all approaches have been tested, but not a single research group has come close to producing artificial intelligence.

AI research has joined the general stream of singularity technologies (a species leap, exponential human development), such as computer science, expert systems, nanotechnology, molecular bioelectronics, theoretical biology, quantum theory(s), nootropics, extropians, etc.; see the daily news stream of Kurzweil News and MIT.

The results of developments in AI have entered higher and secondary education in Russia in the form of computer science textbooks. These now cover working with and creating knowledge bases and expert systems on personal computers using domestic logic-programming systems, as well as the study of fundamental questions of mathematics and computer science through examples of working with models of knowledge bases and expert systems in schools and universities.

The following artificial intelligence systems have been developed:

1. Deep Blue defeated the world chess champion. (The match between Kasparov and the supercomputer brought satisfaction neither to computer scientists nor to chess players, and the system was not recognized by Kasparov, although the original compact chess programs are an integral element of chess creativity. The IBM line of supercomputers then appeared in the brute-force projects Blue Gene (molecular modeling) and the modeling of the pyramidal-cell system at the Swiss Blue Brain center. This story is an example of the intricate and secretive relationship between AI, business, and national strategic objectives.)

2. Mycin was one of the early expert systems that could diagnose a small set of diseases, often as accurately as doctors.

3. 20q is a project based on AI ideas, based on the classic game “20 Questions”. It became very popular after appearing on the Internet on the website 20q.net.

4. Speech recognition. Systems such as ViaVoice are capable of serving consumers.

5. Robots compete in a simplified form of football in the annual RoboCup tournament.

Banks use artificial intelligence (AI) systems in insurance activities (actuarial mathematics), in stock-exchange trading, and in property management. In August 2001, robots beat humans in an impromptu trading competition (BBC News, 2001). Pattern recognition methods (including both more complex specialized methods and neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters, and air-defense systems (target identification), as well as for a number of other national-security tasks.

Computer game developers are forced to use AI of varying degrees of sophistication. Standard tasks of AI in games are finding a path in two-dimensional or three-dimensional space, simulating the behavior of a combat unit, calculating the correct economic strategy, and so on.

Artificial intelligence is closely related to transhumanism. Together with neurophysiology, epistemology and cognitive psychology, it forms a more general science called cognitive science. Philosophy plays a special role in artificial intelligence: epistemology, the science of knowledge within philosophy, is closely related to the problems of artificial intelligence. Philosophers working on this topic grapple with questions similar to those faced by AI engineers about how best to represent and use knowledge and information. Producing knowledge from data is one of the basic problems of data mining; there are various approaches to it, including ones based on neural network technology that use procedures for verbalizing neural networks.

In computer science, problems of artificial intelligence are considered from the perspective of designing expert systems and knowledge bases. Knowledge bases are understood as a set of data and inference rules that allow logical inference and meaningful processing of information. In general, research into problems of artificial intelligence in computer science is aimed at the creation, development and operation of intelligent information systems, including issues of training users and developers of such systems.

The science of “creating artificial intelligence” could not help but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised. On the one hand, they are inextricably linked with this science, and on the other, they introduce some chaos into it. Philosophical problems of creating artificial intelligence can be divided into two groups, relatively speaking, “before and after the development of AI.” The first group answers the question: “What is AI, is it possible to create it, and, if possible, how to do it?” The second group (ethics of artificial intelligence) asks the question: “What are the consequences of creating AI for humanity?”

Issues of creating artificial intelligence. Two directions of AI development are visible: the first consists in solving the problems associated with bringing specialized AI systems closer to human capabilities and integrating them, as is realized in human nature; the second in creating an Artificial Intelligence that integrates already created AI systems into a single system capable of solving the problems of humanity.

Among AI researchers there is still no dominant point of view on the criteria of intelligence, no systematization of the goals and tasks to be solved, and not even a strict definition of the science. There are different views on what should count as intelligence. The analytical approach involves analyzing a person's higher nervous activity down to the lowest, indivisible level (a function of higher nervous activity, an elementary reaction to external stimuli, the excitation of the synapses of a set of neurons connected by function) and then reproducing these functions.

Some experts take for intelligence the ability to make a rational, motivated choice under conditions of a lack of information. That is, a program is considered intelligent simply if it can choose from a certain set of alternatives, for example, where to go in the case of "if you go left...", "if you go right...", "if you go straight...".

The most heated debate in the philosophy of artificial intelligence is the question of the possibility of thinking created by human hands. The question “Can a machine think?”, which prompted researchers to create the science of simulating the human mind, was posed by Alan Turing in 1950. The two main points of view on this issue are called the hypotheses of strong and weak artificial intelligence.

The term "strong artificial intelligence" was introduced by John Searle, in whose words the approach is characterized thus: "Such a program would not just be a model of the mind; it would, in the literal sense of the word, itself be a mind, in the same sense in which the human mind is a mind." In contrast, proponents of weak AI prefer to view programs only as tools for solving particular problems that do not require the full range of human cognitive abilities.

John Searle's "Chinese Room" thought experiment argues that passing the Turing test is not a criterion for a machine to possess a genuine thought process. Thinking is the process of processing information stored in memory: analysis, synthesis and self-programming. A similar position is taken by Roger Penrose, who in his book "The Emperor's New Mind" argues that it is impossible to obtain the thinking process on the basis of formal systems.


6. Computing devices and microprocessors.

A microprocessor (MP) is a device that receives, processes and issues information. Structurally, the MP contains one or more integrated circuits and performs actions defined by a program stored in memory. (Fig. 6.1)

Figure 6.1 – External view of an MP

Early processors were created as unique components for one-of-a-kind computer systems. Later, computer manufacturers moved from the expensive method of developing processors designed to run one single or a few highly specialized programs to mass production of typical classes of multi-purpose processor devices. The trend towards standardization of computer components arose during the era of rapid development of semiconductor elements, mainframes and minicomputers, and with the advent of integrated circuits it became even more popular. The creation of microcircuits made it possible to further increase the complexity of CPUs while simultaneously reducing their physical size.

The standardization and miniaturization of processors has led to the deep penetration of digital devices based on them into everyday human life. Modern processors can be found not only in high-tech devices such as computers, but also in cars, calculators, mobile phones and even children's toys. Most often they are represented by microcontrollers, where, in addition to the computing device, additional components are located on the chip (program and data memory, interfaces, input/output ports, timers, etc.). The computing capabilities of the microcontroller are comparable to the processors of personal computers ten years ago, and more often than not even significantly exceed their performance.

A microprocessor system (MPS) is a computing, instrumentation or control system in which the main information processing device is the MP. The microprocessor system is built from a set of microprocessor LSIs (Fig. 6.2).

Figure 6.2– Example of a microprocessor system

The clock pulse generator sets a time interval, which is a unit of measurement (quantum) for the duration of the command execution. The higher the frequency, the faster the MPS is, all other things being equal. MP, RAM and ROM are integral parts of the system. Input and output interfaces - devices for interfacing MPS with input and output blocks. Measuring instruments are characterized by input devices in the form of a push-button remote control and measuring converters (ADCs, sensors, digital information input units). Output devices usually represent digital displays, a graphic screen (display), and external devices for interface with the measuring system. All MPS blocks are interconnected by digital information transmission buses. The MPS uses the backbone communication principle, in which blocks exchange information via a single data bus. The number of lines in the data bus usually corresponds to the MPS capacity (the number of bits in a data word). The address bus is used to indicate the direction of data transfer - it transmits the address of a memory cell or I/O block that is currently receiving or transmitting information. The control bus is used to transmit signals synchronizing the entire operation of the MPS.
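The backbone exchange described above can be sketched in miniature: the address bus selects which block decodes the transfer, the data bus carries the word, and the control bus distinguishes read from write cycles. The address map and block sizes below are invented for illustration, not a description of any real MPS:

```python
class Bus:
    """Backbone (single-bus) exchange: the address selects a block,
       the data word travels over one shared data bus."""
    def __init__(self):
        self.blocks = {}  # (base, limit) address range -> block storage

    def attach(self, base, size, block):
        self.blocks[(base, base + size)] = block

    def write(self, addr, word):  # control bus: "write" cycle
        for (lo, hi), block in self.blocks.items():
            if lo <= addr < hi:
                block[addr - lo] = word
                return
        raise ValueError("no block decodes address %#x" % addr)

    def read(self, addr):  # control bus: "read" cycle
        for (lo, hi), block in self.blocks.items():
            if lo <= addr < hi:
                return block[addr - lo]
        raise ValueError("no block decodes address %#x" % addr)

bus = Bus()
bus.attach(0x0000, 256, [0] * 256)  # RAM at the bottom of the address space
bus.attach(0x8000, 16, [0] * 16)    # an I/O block mapped at 0x8000
bus.write(0x8003, 0x5A)             # the address routes this word to the I/O block
print(hex(bus.read(0x8003)))
```

The point of the sketch is that every block sees the same bus; only address decoding determines who actually responds to a given cycle.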

The construction of the MPS is based on three principles:

  • trunking;
  • modularity;
  • microprogram control.

The principle of trunking determines the nature of the connections between the functional blocks of the MPS: all blocks are connected to a single system bus.

The principle of modularity is that the system is built on the basis of a limited number of types of structurally and functionally complete modules.

The principles of trunking and modularity make it possible to increase the control and computing capabilities of the MP by connecting other modules to the system bus.

The principle of microprogram control is the ability to carry out elementary operations - microcommands (shifts, information transfers, logical operations), with the help of which a technological language is created, i.e. a set of commands that best suits the purpose of the system.
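As an illustration of this idea, a composite command can be assembled from elementary micro-operations. The register names, operation set and the "command" below are invented for illustration:

```python
# Elementary micro-operations over a register file (8-bit registers assumed)
MICRO_OPS = {
    "mov": lambda regs, dst, src: regs.__setitem__(dst, regs[src]),          # transfer
    "shl": lambda regs, dst, n:   regs.__setitem__(dst, (regs[dst] << n) & 0xFF),  # shift
    "and": lambda regs, dst, src: regs.__setitem__(dst, regs[dst] & regs[src]),    # logic
}

def run(microprogram, regs):
    """A 'command' of the technological language is just a sequence of micro-instructions."""
    for op, *args in microprogram:
        MICRO_OPS[op](regs, *args)
    return regs

# A composite command "R0 := (R1 << 1) AND R2" built from micro-operations
regs = {"R0": 0, "R1": 0b0110, "R2": 0b1100}
run([("mov", "R0", "R1"), ("shl", "R0", 1), ("and", "R0", "R2")], regs)
print(bin(regs["R0"]))  # 0b1100
```

Different microprograms over the same micro-operation set yield different command systems, which is exactly how the principle lets a designer tailor the instruction set to the purpose of the system.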

According to their purpose, MPs are divided into universal and specialized.

Universal microprocessors are general-purpose microprocessors that solve a wide class of computing, processing and control problems. An example of the use of universal MPs are computers built on IBM and Macintosh platforms.

Specialized microprocessors are designed to solve problems of only a certain class. Specialized MPs include signal processors, multimedia MPs, and transputers.

Signal processors (DSPs) are designed for real-time digital signal processing, for example, signal filtering, computing convolutions and correlation functions, limiting and shaping signals, and performing the forward and inverse Fourier transforms (Fig. 6.3). Signal processors include the TMS320C80 from Texas Instruments, the ADSP2106x from Analog Devices, and the DSP560xx and DSP9600x from Motorola.

Figure 6.3– Example of internal DSP structure

Media and multimedia processors are designed to process audio signals, graphic information and video images, as well as to solve a number of problems in multimedia computers, game consoles and household appliances. These include processors from MicroUnity (Mediaprocessor), Philips (Trimedia), Chromatic Research (Mpact Media Engine), Nvidia (NV1), and Cyrix (MediaGX).

Transputers are designed to organize massively parallel calculations and work in multiprocessor systems. They are characterized by the presence of internal memory and a built-in interprocessor interface, i.e., communication channels with other MP LSIs.

Based on the type of architecture, or the principle of construction, a distinction is made between MPs with von Neumann architecture and MPs with Harvard architecture.

The concept of microprocessor architecture defines its component parts, as well as the connections and interactions between them.

Architecture includes:

MP block diagram;

MP software model (description of register functions);

Information about memory organization (capacity and memory addressing methods);

Description of the organization of input/output procedures.

The von Neumann architecture (Fig. 6.4, a) was proposed in 1945 by the American mathematician John von Neumann. Its peculiarity is that the program and data are located in shared memory, which is accessed over a single bus for data and commands.

Harvard architecture was first implemented in 1944 in the relay computer at Harvard University (USA). A feature of this architecture is that the data memory and program memory are separated and have separate data buses and command buses (Fig. 6.4, b), which makes it possible to increase the performance of the MP system.

Figure 6.4 – Main types of architecture: a – von Neumann; b – Harvard

Based on the type of instruction system, a distinction is made between CISC (Complex Instruction Set Computing) processors with a full set of instructions (typical representatives are the Intel x86 microprocessor family) and RISC (Reduced Instruction Set Computing) processors with a reduced set of instructions (characterized by fixed-length instructions, a large number of registers, register-to-register operations, and the absence of indirect addressing).

A single-chip microcontroller (MCU) is a chip designed to control electronic devices (Fig. 6.5). A typical microcontroller combines the functions of a processor and of peripheral devices and may contain RAM and ROM. Essentially, it is a single-chip computer capable of performing simple tasks. Using a single chip instead of a whole set significantly reduces the size, power consumption and cost of devices built on microcontrollers.

Figure 6.5 – Examples of microcontroller designs

Microcontrollers are the basis for building embedded systems; they can be found in many modern devices, such as telephones, washing machines, etc. Most of the processors produced in the world are microcontrollers.

Today, popular among developers are the 8-bit microcontrollers compatible with Intel's i8051, PIC microcontrollers from Microchip Technology and AVR from Atmel, the sixteen-bit MSP430 from TI, and ARM, whose architecture is developed by ARM and licensed to other companies for production.

When designing microcontrollers, there is a balance between size and cost on the one hand, and flexibility and performance on the other. For different applications, the optimal balance of these and other parameters can vary greatly. Therefore, there are a huge number of types of microcontrollers, differing in the architecture of the processor module, the size and type of built-in memory, the set of peripheral devices, the type of case, etc.

A partial list of peripherals that may be present in microcontrollers includes:

  • universal digital ports that can be configured for input or output;
  • various I/O interfaces such as UART, I²C, SPI, CAN, USB, IEEE 1394, Ethernet;
  • analog-to-digital and digital-to-analog converters;
  • comparators;
  • pulse-width modulators;
  • timers, a built-in clock generator and a watchdog timer;
  • brushless motor controllers;
  • display and keyboard controllers;
  • radio-frequency receivers and transmitters;
  • arrays of built-in flash memory.

Artificial intelligence (AI) is the science and technology of creating intelligent machines, especially intelligent computer programs. AI is related to the similar task of using computers to understand human intelligence, but is not necessarily limited to biologically plausible methods.

What is artificial intelligence

Intelligence (from the Latin intellectus: sensation, perception, understanding, concept, reason), or mind, is a quality of the psyche consisting of the ability to adapt to new situations, to learn and remember on the basis of experience, to understand and apply abstract concepts, and to use one's knowledge to manage one's environment. Intelligence is the general capacity for cognition and for solving difficulties, which unites all human cognitive abilities: sensation, perception, memory, representation, thinking, imagination.

In the early 1980s, the computer scientists Barr and Feigenbaum proposed the following definition of artificial intelligence (AI):


Later, a number of algorithms and software systems began to be classified as AI, the distinctive property of which is that they can solve some problems in the same way as a person thinking about their solution would do.

The main properties of AI are understanding language, learning and the ability to think and, importantly, act.

AI is a complex of related technologies and processes that are developing qualitatively and rapidly, for example:

  • natural language text processing
  • expert systems
  • virtual agents (chatbots and virtual assistants)
  • recommendation systems.

AI Research

  • Main article: Artificial Intelligence Research

Standardization in AI

2018: Development of standards in the field of quantum communications, AI and smart city

On December 6, 2018, the Technical Committee "Cyber-Physical Systems", based at RVC, together with the Regional Engineering Center "SafeNet" began developing a set of standards for the markets of the National Technology Initiative (NTI) and the digital economy. By March 2019 it was planned to develop technical standardization documents in the field of quantum communications, RVC reported.

Impact of artificial intelligence

Risk to the development of human civilization

Impact on the economy and business

  • The impact of artificial intelligence technologies on the economy and business

Impact on the labor market

Artificial Intelligence Bias

At the heart of everything that constitutes the practice of AI (machine translation, speech recognition, natural-language processing, computer vision, automated driving and much more) is deep learning. It is a subset of machine learning characterized by the use of neural network models, which can be said to mimic the workings of the brain, so classifying them as AI is something of a stretch. Any neural network model is trained on large data sets and thereby acquires certain "skills", but how it uses them remains unclear to its creators, which ultimately becomes one of the most important problems for many deep-learning applications: such a model works with images formally, without any understanding of what it does. Is such a system AI, and can systems built on machine learning be trusted? The implications of the answer to the last question extend beyond the scientific laboratory, and so media attention to the phenomenon called AI bias has noticeably intensified.

Artificial Intelligence Technology Market

AI market in Russia

Global AI market

Areas of application of AI

The areas of application of AI are quite wide, covering both familiar technologies and emerging new areas that are still far from mass application; in other words, the entire range of solutions, from vacuum cleaners to space stations. Their diversity can be divided according to the criterion of key points of development.

AI is not a monolithic subject area. Moreover, some technological areas of AI appear as new sub-sectors of the economy and separate entities, while simultaneously serving most areas in the economy.

The development of the use of AI leads to the adaptation of technologies in classical sectors of the economy along the entire value chain and transforms them, leading to the algorithmization of almost all functionality, from logistics to company management.

Using AI for Defense and Military Affairs

Use in education

Using AI in business

AI in the electric power industry

  • At the design level: improved forecasting of generation and demand for energy resources, assessment of the reliability of power generating equipment, automation of increased generation when demand surges.
  • At the production level: optimization of preventive maintenance of equipment, increasing generation efficiency, reducing losses, preventing theft of energy resources.
  • At the promotion level: optimization of pricing depending on the time of day and dynamic billing.
  • At the level of service provision: automatic selection of the most profitable supplier, detailed consumption statistics, automated customer service, optimization of energy consumption taking into account the customer’s habits and behavior.

AI in manufacturing

  • At the design level: increasing the efficiency of new product development, automated supplier assessment and analysis of spare parts requirements.
  • At the production level: improving the process of completing tasks, automating assembly lines, reducing the number of errors, reducing delivery times for raw materials.
  • At the promotion level: forecasting the volume of support and maintenance services, pricing management.
  • At the level of service provision: improving planning of vehicle fleet routes, demand for fleet resources, improving the quality of training of service engineers.

AI in banks

  • Pattern recognition - used, among other things, to recognize customers in branches and convey specialized offers to them.

AI in transport

  • The auto industry is on the verge of a revolution: 5 challenges of the era of unmanned driving

AI in logistics

AI in brewing

Use of AI in public administration

AI in forensics

  • Pattern recognition - used, among other things, to identify criminals in public spaces.
  • In May 2018, it became known that the Dutch police were using artificial intelligence to investigate complex crimes.

Law enforcement agencies have begun digitizing more than 1,500 reports and 30 million pages related to unsolved cases, The Next Web reports. Materials from 1988 onwards, in which the crime was not solved for at least three years, and the offender was sentenced to more than 12 years in prison, are transferred into computer format.

Once all the content is digitized, it will be connected to a machine learning system that will analyze the records and decide which cases use the most reliable evidence. This should reduce the time it takes to process cases and solve past and future crimes from several weeks to one day.

Artificial intelligence will categorize cases according to their “solvability” and indicate possible results of DNA testing. The plan is then to automate analysis in other areas of forensics, and perhaps even expand into areas such as social science and testimony.

In addition, as one of the system developers, Jeroen Hammer, said, API functions for partners may be released in the future.


The Dutch police have a special unit that specializes in developing new technologies for solving crimes. It was this unit that created the AI system for quickly searching for criminals based on evidence.

AI in the judiciary

Developments in the field of artificial intelligence will help radically change the judicial system, making it fairer and free from corruption schemes. This opinion was expressed in the summer of 2017 by Vladimir Krylov, Doctor of Technical Sciences, technical consultant at Artezio.

The scientist believes that existing solutions in the field of AI can be successfully applied in various spheres of the economy and public life. The expert points out that AI is successfully used in medicine, but in the future it can completely change the judicial system.

“Looking at news reports every day about developments in the field of AI, you are only amazed at the inexhaustible imagination and fruitfulness of researchers and developers in this field. Reports on scientific research are constantly interspersed with publications about new products bursting onto the market and reports of amazing results obtained through the use of AI in various fields. If we talk about expected events, accompanied by noticeable hype in the media, in which AI will again become the hero of the news, then I probably won’t risk making technological forecasts. I can imagine that the next event will be the emergence somewhere of an extremely competent court in the form of artificial intelligence, fair and incorruptible. This will happen, apparently, in 2020-2025. And the processes that will take place in this court will lead to unexpected reflections and the desire of many people to transfer to AI most of the processes of managing human society.”

The scientist recognizes the use of artificial intelligence in the judicial system as a “logical step” to develop legislative equality and justice. Machine intelligence is not subject to corruption and emotions, can strictly adhere to the legislative framework and make decisions taking into account many factors, including data that characterize the parties to the dispute. By analogy with the medical field, robot judges can operate with big data from government service repositories. It can be assumed that machine intelligence will be able to quickly process data and take into account significantly more factors than a human judge.

Expert psychologists, however, believe that the absence of an emotional component when considering court cases will negatively affect the quality of the decision. The verdict of a machine court may be too straightforward, not taking into account the importance of people’s feelings and moods.

Painting

In 2015, the Google team tested neural networks to see whether they could create images on their own. The artificial intelligence was first trained on a large number of different pictures. However, when the machine was "asked" to depict something on its own, it turned out to interpret the world around us in a somewhat strange way. For example, when tasked with drawing dumbbells, the developers received an image in which the dumbbells were fused with human hands. This probably happened because, at the training stage, the analyzed pictures of dumbbells included hands, and the neural network interpreted this incorrectly.

On February 26, 2016, at a special auction in San Francisco, Google representatives raised about $98 thousand from psychedelic paintings created by artificial intelligence. These funds were donated to charity. One of the machine's most successful pictures is presented below.

A painting painted by Google's artificial intelligence.

Among the most important classes of tasks posed to developers of intelligent systems since artificial intelligence emerged as a scientific field (the mid-1950s), the following areas, which address problems that are difficult to formalize, should be highlighted: theorem proving, image recognition, machine translation and understanding of human speech, game programs, machine creativity, and expert systems. Let us briefly consider their essence.

Directions of artificial intelligence

Proof of theorems. The study of theorem proving techniques played an important role in the development of artificial intelligence. Many informal problems, for example, medical diagnostics, are solved using methodological approaches that were used to automate theorem proving. Finding a proof of a mathematical theorem requires not only deduction from hypotheses, but also the creation of intuitive assumptions about which intermediate statements should be proven for the overall proof of the main theorem.
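The idea of mechanical deduction can be illustrated by a far simpler cousin of a theorem prover: a minimal sketch that verifies a propositional formula by checking it under every truth assignment. The function and formula names here are illustrative, not taken from any real prover.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """A formula is a propositional theorem iff it holds under every assignment."""
    return all(formula(*values)
               for values in product([False, True], repeat=num_vars))

# Modus ponens as a single formula: ((p -> q) and p) -> q
modus_ponens = lambda p, q: not ((not p or q) and p) or q
print(is_tautology(modus_ponens, 2))              # True: a theorem

# p -> q alone is not a theorem (falsified by p=True, q=False)
print(is_tautology(lambda p, q: not p or q, 2))   # False
```

Real provers, of course, cannot enumerate assignments for predicate logic; this is where the heuristic choice of intermediate statements mentioned above becomes essential.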

Image recognition. The use of artificial intelligence for image recognition has made it possible to create practical working systems for identifying graphic objects based on similar features. Any characteristics of the objects to be recognized can serve as features, but they must be invariant to the orientation, size, and shape of the objects. The alphabet of features is designed by the system developer, and the quality of recognition depends largely on how well it is designed. Recognition consists of first extracting a feature vector for an object selected in the image and then determining which standard in the feature alphabet this vector corresponds to.
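The final matching step described above (comparing a feature vector against the standards of the feature alphabet) can be sketched as a nearest-neighbor comparison. The feature names and reference values below are invented for illustration.

```python
import math

# Hypothetical feature "standards" (reference vectors), one per class,
# playing the role of the feature alphabet chosen by the system developer.
STANDARDS = {
    "circle":   [0.95, 0.10],   # e.g. (roundness, corner score), normalized
    "square":   [0.60, 0.80],
    "triangle": [0.40, 0.60],
}

def classify(feature_vector):
    """Assign the object to the standard with the smallest Euclidean distance."""
    return min(STANDARDS, key=lambda name: math.dist(feature_vector, STANDARDS[name]))

print(classify([0.90, 0.15]))  # "circle" — the closest standard
```

In a real system the feature extraction itself is the hard part; this sketch assumes the vector has already been computed.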

Machine translation and human speech understanding. Analyzing sentences of human speech using a dictionary is a typical task for artificial intelligence systems. To solve it, an intermediary language was created that facilitates the comparison of phrases from different languages. This intermediary language subsequently turned into a semantic model for representing the meanings of texts to be translated, and the evolution of the semantic model led to the creation of a language for the internal representation of knowledge. As a result, modern systems analyze texts and phrases in four main stages: morphological, syntactic, semantic, and pragmatic analysis.
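A minimal sketch of the four-stage analysis might look as follows. Every stage here is a toy placeholder that only handles simple subject-verb-object sentences; it is meant to show the staged structure, not real linguistic analysis.

```python
def analyze(sentence):
    """Toy four-stage analysis: each stage consumes the previous stage's output."""
    # 1. Morphological: split into normalized word forms
    tokens = sentence.rstrip(".").lower().split()
    # 2. Syntactic: assume a fixed subject-verb-object pattern (toy grammar)
    subject, verb, obj = tokens[0], tokens[1], " ".join(tokens[2:])
    # 3. Semantic: map to an internal meaning representation
    meaning = {"action": verb, "agent": subject, "patient": obj}
    # 4. Pragmatic: interpret the meaning in context (here: just label the intent)
    meaning["intent"] = "statement"
    return meaning

print(analyze("Kaissa plays chess."))
# {'action': 'plays', 'agent': 'kaissa', 'patient': 'chess', 'intent': 'statement'}
```

The internal dictionary produced by stage 3 plays the role of the intermediary language: translation would generate a target-language sentence from it rather than from the source words directly.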

Game programs. Most game programs are based on a few basic ideas of artificial intelligence, such as search over possible moves and self-learning. One of the most interesting problems in the field of game programs using artificial intelligence methods is teaching a computer to play chess; work on it began in the early days of computing, in the late 1950s.

In chess, there are certain levels of skill, degrees of quality of play, which can provide clear criteria for assessing the intellectual growth of the system. Therefore, computer chess has been actively studied by scientists from all over the world, and the results of their achievements are used in other intellectual developments that have real practical significance.

In 1974, the first world championship among chess programs was held as part of the regular IFIP (International Federation for Information Processing) congress in Stockholm. The winner of this competition was the chess program Kaissa, created in Moscow at the Institute of Control Sciences of the USSR Academy of Sciences.

Machine creativity. One area of application for artificial intelligence is software systems that can independently create music, poetry, stories, articles, and even theses and dissertations. Today there is a whole class of musical programming languages (for example, Csound). Special software has been created for various musical tasks: sound processing and sound synthesis systems, interactive composition systems, and algorithmic composition programs.

Expert systems. Artificial intelligence methods have found application in the creation of automated consulting systems or expert systems. The first expert systems were developed as research tools in the 1960s.

They were artificial intelligence systems specifically designed to solve complex problems in a narrow subject area, such as medical diagnosis of diseases. The classic goal of this direction was initially to create a general-purpose artificial intelligence system that would be able to solve any problem without specific knowledge in the subject area. Due to limited computing resources, this problem turned out to be too complex to solve with an acceptable result.

Commercial implementation of expert systems occurred in the early 1980s, and since then expert systems have become widespread. They are used in business, science, technology, manufacturing, and in many other areas where there is a well-defined subject area. The main meaning of the expression “well-defined” is that a human expert is able to determine the stages of reasoning with the help of which any problem in a given subject area can be solved. This means that similar actions can be performed by a computer program.

Now we can say with confidence that the use of artificial intelligence systems opens up broad possibilities.

Today, expert systems are one of the most successful applications of artificial intelligence technology.


Definition

Artificial intelligence can be defined as a scientific discipline that deals with the automation of intelligent behavior.

Artificial intelligence (AI) is the science and technology of creating intelligent machines, especially intelligent computer programs. AI is related to the similar task of using computers to understand human intelligence, but it is not necessarily limited to biologically plausible methods.

Goals and objectives

The goal of artificial intelligence is to create technical systems capable of solving non-computational problems and performing actions that require processing of meaningful information and are considered the prerogative of the human brain. Such problems include, for example, problems of proving theorems, game problems (say, when playing chess), problems of translation from one language to another, composing music, recognizing visual images, solving complex creative problems of science and social practice. One of the important tasks of artificial intelligence is the creation of intelligent robots capable of autonomously performing operations to achieve goals set by humans and making adjustments to their actions.

Concept structure

“Artificial intelligence” rests on several basic principles and disciplines that form its foundation; they are shown in more detail in the figure below.

Below are the basic definitions of the terms used in the figure.

Fuzzy logic and fuzzy set theory - a branch of mathematics that generalizes classical logic and set theory. The concept of fuzzy logic was first introduced by Professor Lotfi Zadeh in 1965. In his article, the concept of a set was expanded by the assumption that the membership function of an element in a set can take any value in the interval [0, 1], and not just 0 or 1. Such sets were called fuzzy. The author also proposed various logical operations on fuzzy sets and introduced the concept of a linguistic variable whose values are fuzzy sets.
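A minimal sketch of a fuzzy set, assuming a commonly used triangular membership function; the set “warm temperature” and its bounds are invented for illustration.

```python
def triangular(a, b, c):
    """Membership function of a fuzzy set: a triangle on [a, c] with peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)   # rising edge
        return (c - x) / (c - b)       # falling edge
    return mu

warm = triangular(15.0, 22.0, 30.0)    # hypothetical fuzzy set "warm temperature"
print(warm(22.0))   # 1.0 — fully warm
print(warm(18.5))   # 0.5 — warm "to degree one half"

# Zadeh's fuzzy AND / OR are the min / max of membership degrees
print(min(warm(18.5), warm(26.0)))     # fuzzy AND of two degrees
```

Unlike a classical set, where 18.5 °C would have to be either warm or not, here membership is a matter of degree.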

Artificial neural networks (ANN) - mathematical models, as well as their software or hardware implementations, built on the principle of the organization and functioning of biological neural networks - the networks of nerve cells in a living organism. The concept arose from studying the processes occurring in the brain and trying to model them. The first such attempt was the neural networks of McCulloch and Pitts. Later, after learning algorithms were developed, the resulting models began to be used for practical purposes: in forecasting problems, pattern recognition, control problems, and so on.
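The McCulloch-Pitts model mentioned above can be sketched as a single binary threshold neuron. The weights and thresholds below are hand-picked to realize logic gates, a classic textbook illustration rather than a trained network.

```python
def mcculloch_pitts(weights, threshold):
    """A binary threshold neuron: fires (1) iff the weighted input sum reaches the threshold."""
    def neuron(inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return neuron

# Logical AND and OR realized as single neurons with hand-picked weights
and_gate = mcculloch_pitts([1, 1], threshold=2)
or_gate  = mcculloch_pitts([1, 1], threshold=1)

print(and_gate([1, 1]), and_gate([1, 0]))  # 1 0
print(or_gate([0, 1]), or_gate([0, 0]))    # 1 0
```

Modern networks differ in that the weights are not hand-picked but found by a learning algorithm, which is exactly the development that made ANNs practically useful.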

Intelligent agent - a program that independently performs a task specified by the computer user over long periods of time. Intelligent agents are used to assist an operator or to collect information. One example of a task performed by agents is the continuous search for and collection of relevant information on the Internet. Computer viruses, bots, and search robots can also be classified as intelligent agents. Although such agents follow a strict algorithm, “intelligence” in this context is understood as the ability to adapt and learn.
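The sense-decide-act cycle of a simple reflex agent can be sketched as follows; the rules and percepts are invented for illustration and stand in for the far richer behavior described above.

```python
class ReflexAgent:
    """Minimal sense-decide-act loop: map each percept to an action via rules."""
    def __init__(self, rules):
        self.rules = rules  # list of (condition, action) pairs, tried in order

    def step(self, percept):
        for condition, action in self.rules:
            if condition(percept):
                return action
        return "wait"       # default action when no rule fires

# Hypothetical monitoring agent: reacts to keywords in incoming messages
agent = ReflexAgent([
    (lambda p: "error" in p,  "alert-operator"),
    (lambda p: "report" in p, "archive"),
])
print(agent.step("daily report ready"))    # "archive"
print(agent.step("disk error detected"))   # "alert-operator"
```

A truly intelligent agent would additionally update its rules from experience; here the rule table is fixed.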

Expert system (ES) - a computer program that can partially replace a human expert in resolving a problem situation. Modern expert systems began to be developed by artificial intelligence researchers in the 1970s and received commercial backing in the 1980s. Their forerunners were proposed in 1832 by S. N. Korsakov, who created mechanical devices, the so-called “intelligent machines,” that made it possible to find solutions given certain conditions, for example, to determine the most appropriate medications based on the symptoms observed in a patient.
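In the spirit of Korsakov's example, a toy rule-based consultation can be sketched as forward matching of symptom sets against a rule base. All symptoms, rules, and conclusions here are invented for illustration and are not medical advice.

```python
# Toy rule base: each rule fires when all of its conditions are among the facts.
RULES = [
    ({"fever", "cough"}, "suspect flu"),
    ({"fever", "rash"},  "suspect measles"),
    ({"sneezing"},       "suspect allergy"),
]

def consult(symptoms):
    """Return every conclusion whose conditions are all present in the observed facts."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= set(symptoms)]

print(consult(["fever", "cough", "sneezing"]))
# ['suspect flu', 'suspect allergy']
```

Real expert systems add an inference engine that chains rules (conclusions become new facts) and an explanation facility that reports which rules fired; this sketch shows only a single matching pass.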

Genetic algorithm - a heuristic search algorithm used to solve optimization and modeling problems by randomly selecting, combining, and varying the desired parameters using mechanisms reminiscent of biological evolution. It is a type of evolutionary computation. A distinctive feature of the genetic algorithm is its emphasis on the “crossover” operator, which recombines candidate solutions, its role being similar to that of crossing in living nature.
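A minimal genetic algorithm over bit strings can sketch the selection, crossover, and mutation steps just described. The population size, generation count, and mutation rate below are arbitrary choices, and "OneMax" (maximize the number of ones) is a standard toy benchmark.

```python
import random

def evolve(fitness, length=16, pop_size=30, generations=60, seed=1):
    """Tiny genetic algorithm over bit strings: selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # mutation: flip one random bit
                i = rng.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# "OneMax": the fittest string is all ones
best = evolve(fitness=sum)
print(sum(best))  # typically converges to 16 (all ones)
```

Because the fitter half always survives, the best fitness never decreases; crossover is what lets good partial solutions from different parents combine, which is exactly the operator the text singles out.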

Research models and methods

Symbolic modeling of thought processes

Analyzing the history of AI, we can identify a broad area such as the modeling of reasoning. For many years the development of this science has moved precisely along this path, and it is now one of the most developed areas of modern AI. Modeling reasoning involves creating symbolic systems that take a certain problem as input and are required to produce its solution as output. As a rule, the proposed problem has already been formalized, that is, translated into mathematical form, but either has no solution algorithm or the algorithm is too complex or time-consuming. This area includes theorem proving, decision making and game theory, planning and scheduling, and forecasting.
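The "formalized problem in, solution out" pattern can be sketched with breadth-first search over a state space; the classic two-jug measuring puzzle below is a standard textbook illustration, not taken from the source.

```python
from collections import deque

def solve(start, goal_test, successors):
    """Breadth-first search over a formalized state space: problem in, solution out."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path                       # shortest sequence of states
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                               # no solution exists

# Classic formalized puzzle: measure 2 litres with a 4-litre and a 3-litre jug
def successors(state):
    a, b = state
    moves = {(4, b), (a, 3), (0, b), (a, 0)}              # fill / empty either jug
    pour = min(a, 3 - b); moves.add((a - pour, b + pour))  # pour a -> b
    pour = min(b, 4 - a); moves.add((a + pour, b - pour))  # pour b -> a
    return moves

path = solve((0, 0), lambda s: 2 in s, successors)
print(path)  # a shortest sequence of (jug_a, jug_b) states ending with 2 litres
```

Once a problem is formalized this way, the solver itself is generic; the difficulty the text mentions is that for real problems the state space is usually far too large for blind search, which is why heuristics matter.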

Working with Natural Languages

An important direction is natural language processing, within which the analysis of the capabilities of understanding, processing and generating texts in “human” language is carried out. In particular, the problem of machine translation of texts from one language to another has not yet been solved. In the modern world, the development of information retrieval methods plays an important role. By its nature, the original Turing test is related to this direction.

Accumulation and use of knowledge

According to many scientists, an important property of intelligence is the ability to learn. Knowledge engineering therefore comes to the fore, combining the tasks of obtaining knowledge from simple information, systematizing it, and using it. Advances in this area affect almost every other area of AI research. Two important subareas deserve note here. The first, machine learning, concerns the independent acquisition of knowledge by an intelligent system in the course of its operation. The second is related to the creation of expert systems - programs that use specialized knowledge bases to obtain reliable conclusions on a given problem.

The field of machine learning includes a large class of pattern recognition problems: for example, recognition of characters, handwritten text, and speech, and text analysis. Many of these problems are successfully solved using biological modeling (see the next section). Computer vision, which is also related to robotics, deserves particular mention.

Biological modeling of artificial intelligence

This approach differs from the understanding of artificial intelligence in John McCarthy's sense, which proceeds from the position that artificial systems are not obliged to reproduce in their structure and functioning the structure and processes inherent in biological systems. Proponents of the biological approach believe, on the contrary, that the phenomena of human behavior and the human ability to learn and adapt are a consequence of our biological structure and the characteristics of its functioning.

This includes several areas. Neural networks are used to solve fuzzy and complex problems such as geometric shape recognition or object clustering. The genetic approach is based on the idea that an algorithm can become more efficient if it borrows better characteristics from other algorithms (“parents”). A relatively new direction, in which the task is to create an autonomous program - an agent interacting with an external environment - is called the agent-based approach.

Development prospects

At the moment, the development of artificial intelligence has branched into major sectors, which receive the bulk of material and intellectual investment.

Literature

1) "Corporate Knowledge Management and Business Reengineering." Abdikeev, Kiselev

The main resources for the development of companies are increasingly becoming people and the knowledge they possess, intellectual capital and growing professional competence of personnel. Today, new methods of organizational development are required, based on the intersection of humanitarian and engineering approaches, which will allow obtaining a synergistic effect from their interaction. This approach is based on modern advances in information technology, namely cognitive technologies for organizational development. The development of symbiosis is relevant concepts of knowledge management, business process reengineering and the cognitive human component.
For senior managers, business analysts, students of MBA programs in the fields of "Strategic Management" and "Anti-Crisis Management", economics students at the master's level, and graduate students and teachers in the field of corporate management and business reengineering.

2) "Models and Methods of Artificial Intelligence. Application in Economics." M. G. Matveev, A. S. Sviridov, N. A. Aleynikova

The theoretical foundations of artificial intelligence are presented: information aspects, material on binary and fuzzy logic, as well as the methods and models of current areas of artificial intelligence: expert systems, knowledge engineering, neural networks, and genetic algorithms. The issues of practical implementation of intelligent systems are discussed in detail, with many examples illustrating the development and application of the methods and models under consideration. Particular attention is paid to economic problems.

3) "Artificial intelligence and intelligent control systems." I. M. Makarov, V. M. Lokhin, S. V. Manko, M. P. Romanov; editor-in-chief I. M. Makarov

A new, actively developing class of intelligent automatic control systems built on knowledge processing technology is considered from the standpoint of effective application in solving control problems under conditions of uncertainty. The basics of building intelligent systems are outlined.

4) "Artificial intelligence: a modern approach." S. Russell, P. Norvig

The book presents all the modern achievements and outlines the ideas that were formulated in research conducted over the past fifty years, as well as collected over two millennia in the field of knowledge that became the impetus for the development of artificial intelligence as the science of designing rational agents.

List of sources


5) http://ru.wikipedia.org/wiki/%D0%98%D1%81%D0%BA%D1%83%D1%81%D1%81%D1%82%D0%B2%D0%B5%D0%BD%D0%BD%D1%8B%D0%B9_%D0%B8%D0%BD%D1%82%D0%B5%D0%BB%D0%BB%D0%B5%D0%BA%D1%82

This section is devoted to genetic algorithms. What are genetic algorithms? Essentially, they are optimization algorithms belonging to the class of heuristics. They avoid exhaustive enumeration of all options and significantly reduce computation time. The specificity of these algorithms comes down to simulating evolutionary processes.

9) http://www.gotai.net/implementations.aspx

Here you will find ideas and ready-made solutions for the use of artificial intelligence and related theories to solve certain practical problems.

10) http://www.gotai.net/documents-logic.aspx

This section contains materials that in one way or another relate to the classical method of modeling AI systems, modeling based on various logical devices. As a rule, these are materials related to expert systems, decision support systems and agent systems.

11) http://khpi-iip.mipk.kharkiv.edu/library/ai/conspai/12.html

AI Development Trends


