Neural computing

Course Code: MI108.3 • Study year: I • Academic Year: 2024-2025
Domain: Computer Science - Masters • Field of study: Advanced programming and databases
Type of course: Elective (1 of 3)
Language of instruction: Romanian
Erasmus Language of instruction: English
Name of lecturer: Adriana Bîrluțiu
Seminar tutor: Adriana Bîrluțiu
Form of education: Full-time
Form of instruction: Class
Number of teaching hours per semester: 42
Number of teaching hours per week: 3
Semester: Summer
Form of receiving a credit for a course: Grade
Number of ECTS credits allocated: 6

Course aims:

Develop the students’ ability to design software dedicated to solving difficult problems by exploiting neural computing algorithms.
Acquire theoretical and applied knowledge of the principles of neural computing.
Acquire theoretical and applied knowledge of the design and implementation of neural networks.

Course Entry Requirements:

Artificial intelligence - basic notions

Course contents:

1. Introduction to neural network theory. The natural neuron vs. the artificial neuron. Models of neurons and artificial neural networks. Learning in neural networks. Implementations, applications, trends.
2. Feed-forward neural networks. The Perceptron model (a minimal training sketch is given after this list).
3. Multi-layer feed-forward architectures. Limitations of single-layer network architectures. Multi-layer architectures with feed-forward connections.
4. Radial basis function (RBF) networks. Architecture and operation. Representation capacity of RBF networks. Learning algorithms.
5. Recurrent neural networks for associative memories. Associative memories. A mathematical model of the recurrent neural network. The Hopfield model and data storage algorithms (Hebb rule, Diederich-Opper algorithm).
6. Combinatorial optimization problems. The simulated annealing algorithm. Stochastic machines: Boltzmann machines, Helmholtz machines. Applicability and limitations.
7. Time series processing. Preprocessing. Networks with time windows. The Elman model.
8. Cellular neural networks. Architecture and operation. Applications in image processing.
9. Self-organizing neural networks. Unsupervised learning. Biological foundations. Kohonen self-organizing maps.
10. Neuro-symbolic hybrid architectures. Extracting rules from neural networks. Expert systems combined with neural networks.
11. Neuro-fuzzy hybrid architectures. Neuro-genetic hybrid architectures. Genetic algorithms for optimizing neural network topology.
12. Applications of neural networks. Fields of applicability and examples of well-known neural systems successfully used in real-world problems.
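
As an orientation for item 2, the minimal sketch below trains a single perceptron with the classic error-correction rule on a toy linearly separable problem (logical AND). It is an assumed illustration only, not laboratory material; the function name, learning rate, and dataset are chosen arbitrarily for the example.

    # Minimal perceptron training sketch (assumed example; not course lab code).
    import numpy as np

    def train_perceptron(X, y, lr=0.1, epochs=20):
        """Perceptron error-correction rule with a step activation."""
        w = np.zeros(X.shape[1])  # weight vector
        b = 0.0                   # bias
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if xi @ w + b > 0 else 0  # step activation
                delta = lr * (target - pred)       # error-correction term
                w += delta * xi
                b += delta
        return w, b

    # Toy linearly separable problem: logical AND.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y)
    print([1 if xi @ w + b > 0 else 0 for xi in X])  # expected output: [0, 0, 0, 1]

The multi-layer and RBF architectures of items 3-4 extend this single unit by composing many such units and replacing the step activation with differentiable ones.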

Teaching methods:

Lecture, conversation, exemplification

Learning outcomes:

The use of computer tools in an interdisciplinary context

- The description of concepts, theories and models used in the application field.

- The identification and explanation of basic computational models suitable for the application domain.

- The use of computer and mathematical models and tools to solve specific problems in the application field.

- Data and model analysis.

- The development of software components of interdisciplinary projects.

Learning outcomes verification and assessment criteria:

Oral presentation: 50%; continuous assessment (laboratory activities portfolio): 50%

Recommended reading:

Bishop, Christopher, Pattern Recognition and Machine Learning, Springer-Verlag, New York, 2006, 150.
Haykin, S, Neural Networks: A Comprehensive Foundation, Prentice Hall, New York, 1999, 200.
Rashid, Tariq, Make Your Own Neural Network: A Gentle Journey Through the Mathematics of Neural Networks, and Making Your Own Using the Python Computer Language, CreateSpace Independent Publishing Platform, 2016, 150.