Introduction to Artificial Intelligence 4010-AIN
1. Introduction to the basic data science toolset:
- Jupyter Notebook* for interactive coding
- NumPy, SciPy, and pandas for numerical computation
- Matplotlib and seaborn for data visualization
- Scikit-learn* for machine learning
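As a minimal sketch of how these tools fit together (the data here is invented purely for illustration, not taken from the course materials):

```python
import numpy as np
import pandas as pd

# NumPy: vectorized numerical computation
x = np.linspace(0.0, 1.0, 5)   # 5 evenly spaced points in [0, 1]
y = x ** 2                     # element-wise square

# pandas: labeled, tabular data built on top of NumPy arrays
df = pd.DataFrame({"x": x, "y": y})
print(df.describe())           # quick summary statistics
```

Matplotlib and seaborn would then plot `df` directly, and scikit-learn accepts such arrays and DataFrames as model inputs.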
2. Basic concepts and vocabulary of machine learning:
- Supervised learning and how it can be applied to regression and classification problems
- K-Nearest Neighbor (KNN) algorithm for classification
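A minimal KNN classification sketch with scikit-learn; the iris dataset and the choice of k = 5 are illustrative assumptions, not prescribed by the syllabus:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Classify each point by majority vote among its 5 nearest training points
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
accuracy = knn.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```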
3. Principles of core model generalization:
- The difference between over-fitting and under-fitting a model
- Bias-variance tradeoffs
- Finding the optimal training and test data set splits, cross-validation, and model complexity versus error
- Introduction to the linear regression model for supervised learning
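The train/test split, cross-validation, and linear regression ideas above can be sketched together; the synthetic data (y = 3x + 2 plus noise) is an assumption made for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 1.0, 100)  # y = 3x + 2 + noise

# A single held-out test set gives one estimate of generalization error
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_train, y_train)
test_r2 = model.score(X_test, y_test)
print("test R^2:", test_r2)

# 5-fold cross-validation averages over several splits for a more robust estimate
cv_r2 = cross_val_score(LinearRegression(), X, y, cv=5).mean()
print("mean CV R^2:", cv_r2)
```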
4. Model optimization concepts:
- Cost functions, regularization, feature selection, and hyperparameters
- More complex statistical optimization algorithms such as gradient descent, and their application to linear regression
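A sketch of batch gradient descent applied to linear regression, using the mean-squared-error cost J(w, b) = (1/2n) Σ(wxᵢ + b − yᵢ)²; the data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

# Synthetic data: y = 3x + 2 + small noise
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, 200)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y          # residuals of the current fit
    w -= lr * (err * x).mean()   # dJ/dw = mean(err * x)
    b -= lr * err.mean()         # dJ/db = mean(err)

print(f"w = {w:.2f}, b = {b:.2f}")  # should approach the true 3 and 2
```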
5. Logistic regression:
- Logistic regression and how it differs from linear regression
- Metrics for classification error and scenarios in which they can be used
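A sketch of logistic regression together with three common classification metrics; the breast-cancer dataset is an illustrative choice, not prescribed by the syllabus:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_iter raised so the solver converges on this unscaled data
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
pred = clf.predict(X_test)

acc = accuracy_score(y_test, pred)
print("accuracy :", acc)
print("precision:", precision_score(y_test, pred))  # of predicted positives, how many are real
print("recall   :", recall_score(y_test, pred))     # of real positives, how many were found
```

Precision matters when false positives are costly; recall matters when false negatives are, which is why the course treats the metrics scenario by scenario.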
6. Probability theory and its applications:
- The basics of probability theory and its application to the Naïve Bayes classifier
- The different types of Naïve Bayes classifiers and how to train a model using this algorithm
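A sketch of training one Naïve Bayes variant, the Gaussian type, which assumes each feature is normally distributed within each class; the iris dataset is an illustrative assumption:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fits one Gaussian per (feature, class) pair, then applies Bayes' rule
nb = GaussianNB().fit(X_train, y_train)
nb_acc = nb.score(X_test, y_test)
print("test accuracy:", nb_acc)
```

Other variants (MultinomialNB for counts, BernoulliNB for binary features) are trained the same way.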
7. Classification:
- Support vector machines (SVMs)—a popular algorithm used for classification problems
- Examples illustrating the similarity between SVMs and logistic regression
- How to calculate the cost function of SVMs
- Regularization in SVMs and some tips to obtain non-linear classifications with SVMs
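A sketch contrasting a linear SVM with an RBF-kernel SVM on data that is not linearly separable; the two-moons dataset and the parameter values are illustrative assumptions:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaved half-circles: no straight line separates the classes
X, y = make_moons(n_samples=300, noise=0.15, random_state=0)

# C is the regularization parameter: smaller C = wider margin, more tolerance
linear = SVC(kernel="linear", C=1.0).fit(X, y)
rbf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)  # kernel trick

lin_acc = linear.score(X, y)
rbf_acc = rbf.score(X, y)
print("linear kernel accuracy:", lin_acc)
print("RBF kernel accuracy   :", rbf_acc)
```

The RBF kernel implicitly maps points into a higher-dimensional space, yielding the non-linear decision boundary the course's tips refer to.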
8. Advanced supervised learning algorithms; this class covers:
- Decision trees and how to use them for classification problems
- How to identify the best split and the factors for splitting
- Strengths and weaknesses of decision trees
- Regression trees, which extend decision trees to predicting continuous values
- The concepts of bootstrapping and aggregating (commonly known as “bagging”) to reduce variance
- The Random Forest algorithm that further reduces the correlation seen in bagging models
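A sketch comparing a single tree, bagged trees, and a random forest by cross-validated accuracy; the dataset and estimator counts are illustrative assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    # One deep tree: low bias, high variance
    "single tree": DecisionTreeClassifier(random_state=0),
    # Bagging: average many trees fit on bootstrap samples to cut variance
    "bagged trees": BaggingClassifier(
        DecisionTreeClassifier(), n_estimators=50, random_state=0),
    # Random forest: bagging plus random feature subsets to decorrelate trees
    "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, score in scores.items():
    print(f"{name:13s} CV accuracy: {score:.3f}")
```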
9. The boosting algorithm, which helps reduce variance and bias.
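As a sketch of boosting, gradient boosting fits each new shallow tree to the errors of the ensemble built so far; the dataset and hyperparameter values are illustrative assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Shallow trees (max_depth=2) keep each learner weak; the learning_rate
# shrinks each tree's contribution, trading more estimators for stability.
gb = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, max_depth=2, random_state=0)
gb_score = cross_val_score(gb, X, y, cv=5).mean()
print(f"gradient boosting CV accuracy: {gb_score:.3f}")
```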
10. Unsupervised learning algorithms and how they can be applied to clustering and dimensionality reduction problems.
11. Algorithms that can be used to achieve a reduction in dimensionality, such as:
- Principal Component Analysis (PCA)
- Multidimensional Scaling (MDS)
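A sketch of both dimensionality-reduction methods applied to the same data; the iris dataset (4 features reduced to 2) is an illustrative assumption:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

X, _ = load_iris(return_X_y=True)   # 150 samples, 4 features

# PCA: project onto the directions of greatest variance
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
explained = pca.explained_variance_ratio_.sum()
print("PCA output shape:", X_pca.shape)
print("variance explained by 2 components:", explained)

# MDS: find a 2-D embedding that preserves pairwise distances
mds = MDS(n_components=2, random_state=0)
X_mds = mds.fit_transform(X)
print("MDS output shape:", X_mds.shape)
```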
Type of course
Mode of delivery
Course coordinators
Learning outcomes
Students become familiar with the fundamentals of artificial intelligence and machine learning and learn the basic tools and methods. They gain knowledge of how to apply artificial intelligence to solve selected projects.
Assessment criteria
Credit is awarded on the basis of an independently prepared report on an assigned topic and completion of the exercises (independent completion of the exercises in the on-line course).
Literature
Intel Academy: https://software.intel.com/en-us/ai-academy/students/kits/machine-learning-501
More information
Additional information (e.g. about the registration calendar, course instructors, and the location and schedule of classes) may be available in the USOSweb system: