Computational mathematics behind generative AI and machine learning
Plotnikov, Nikita (2024)
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2024060420922
Abstract
Neural networks have revolutionized various fields of computer science, especially machine learning and artificial intelligence. The purpose of this thesis was to examine the fundamentals of neural networks, including the structure of neurons and their schematic representations. It also covered the mathematical foundations of backpropagation, a key algorithm for training neural networks, together with practical implementations in MATLAB.
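The backpropagation algorithm mentioned above can be illustrated with a minimal sketch. The thesis works in MATLAB; the following is an assumed NumPy analogue for a single-hidden-layer network with a sigmoid activation and squared-error loss, not the thesis's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    """Forward pass: input -> sigmoid hidden layer -> linear output."""
    h = sigmoid(W1 @ x)
    y = W2 @ h
    return h, y

def backprop(x, target, W1, W2):
    """Gradients of the loss 0.5 * ||y - target||^2 w.r.t. W1 and W2."""
    h, y = forward(x, W1, W2)
    delta_out = y - target                        # dL/dy
    dW2 = np.outer(delta_out, h)                  # output-layer gradient
    delta_h = (W2.T @ delta_out) * h * (1 - h)    # chain rule through sigmoid
    dW1 = np.outer(delta_h, x)                    # hidden-layer gradient
    return dW1, dW2

# One illustrative gradient-descent step on random data
rng = np.random.default_rng(0)
x = rng.normal(size=3)
target = np.array([1.0])
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
dW1, dW2 = backprop(x, target, W1, W2)
W1 -= 0.1 * dW1
W2 -= 0.1 * dW2
```

The gradients follow directly from the chain rule: the output error is propagated backwards through the linear output layer and the sigmoid derivative h(1 - h).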
Moreover, the study explored the use of inverted convolutional neural networks (CNNs) as generative models by analysing their architecture and mathematical underpinnings, highlighting their role in generating new data as well as their differences in architecture and optimization techniques.
Additionally, the study covered generative adversarial networks (GANs) and variational autoencoders (VAEs), two well-known approaches to generative modelling, discussing their architectural intricacies, training processes, and underlying mathematical concepts, with insights into their loss functions and practical applications.
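As an example of the loss functions referred to above, the VAE objective (negative evidence lower bound) combines a reconstruction term with a KL-divergence term that has a closed form for a diagonal-Gaussian encoder against a standard normal prior. The sketch below is an assumed illustration in NumPy, not code from the thesis.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).

    mu and log_var parameterize the diagonal-Gaussian encoder q(z|x);
    the KL term uses its well-known closed form for this case.
    """
    recon = np.sum((x - x_recon) ** 2)  # squared-error reconstruction term
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon + kl
```

When the encoder matches the prior exactly (mu = 0, log_var = 0) the KL term vanishes, so a perfect reconstruction gives zero loss; any deviation of the posterior from the prior is penalized.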
By combining theoretical discussions with practical implementations, this thesis contributes to a deeper understanding of neural network models and their applications in generative modelling.