Machine learning

Revision as of 6 February 2015, 11:56

Previous years: 2014

Spring 2013/2014
 
ITI8565: Machine learning

Taught by: Sven Nõmm

EAP: 6.0

Time and place: Thursdays
   Lectures: 14:00-15:30  ICT-A2
   Labs: 17:45-19:15  ICT-405
   Consultation: by appointment
  
Additional information: sven.nomm@ttu.ee
 
The course is organised by [http://cs.ttu.ee the Department of Computer Science]. The course is supported by [http://studyitin.ee/ IT Academy].
 
Students should also subscribe to the [http://lists.ttu.ee/mailman/listinfo/machine-learning machine learning list]. It is used to spread information about the course this semester, as well as about any other machine-learning-related events happening at TUT (also in the future).
 
 
Homework rankings based on results (just for fun): [[Meedia:Ranking.pdf|Ranking]] <br />
 
 
No lecture on 18.04.2014. Instead, we will have a joint session for solving homework problems on Thursday 17.04, starting at 14:00 in ICT-411.
 
 
== Assignments ==
 
[[Media:Hw1.pdf|First homework]] about decision trees is open in Moodle. To submit, you have to register [https://moodle.e-ope.ee/course/view.php?id=6504 for the course].
 
 
[[Media:Hw2.pdf|Second homework]] about KNN and K-means is open in Moodle.
 
 
[[Media:Hw3.pdf|Third homework]] about neural networks is open in Moodle.
 
 
[https://www.dropbox.com/sh/50sioj7j8z7rwfn/s_iLJ6VlA0 Data] for the third homework
 
 
[[Media:Hw4.pdf|Fourth homework]] about linear and logistic regression is open in Moodle.
 
 
[[Media:Sbp.txt|Data]] for the fourth homework
 
 
[[Media:Hw5.pdf|Fifth homework]] about naive Bayes is open in Moodle.
 
 
[[Media:Spambase.txt|Data]] for the fifth homework
 
 
[[Media:Hw6.pdf|Sixth homework]] about support vector machines is open in Moodle.
 
  
 
== Lecture 1: Introduction, decision trees ==

Slides

Example made in class - When to play tennis?

[http://ciml.info/dl/v0_8/ciml-v0_8-ch01.pdf Reading] - also contains the full algorithm for decision tree learning with the divide-and-conquer strategy.
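
To make the divide-and-conquer idea concrete, here is a minimal ID3-style sketch in Python. It is an illustration only, not the course's reference implementation; the data layout (a list of (feature_dict, label) pairs) and the helper names are my own assumptions.

<pre>
from collections import Counter
import math

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_feature(data, features):
    base = entropy([label for _, label in data])
    def gain(f):                                  # information gain of splitting on f
        rem = 0.0
        for v in set(x[f] for x, _ in data):
            sub = [(x, l) for x, l in data if x[f] == v]
            rem += len(sub) / len(data) * entropy([l for _, l in sub])
        return base - rem
    return max(features, key=gain)

def build_tree(data, features):
    labels = [l for _, l in data]
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]   # leaf: majority label
    f = best_feature(data, features)                  # greedy split choice
    rest = [g for g in features if g != f]
    return (f, {v: build_tree([(x, l) for x, l in data if x[f] == v], rest)
                for v in set(x[f] for x, _ in data)})  # divide and conquer
</pre>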
 
== Lecture 2: K nearest neighbours ==
 
[[Meedia:Lecture2.pdf|Slides]]
 
 
[http://ciml.info/dl/v0_8/ciml-v0_8-ch02.pdf Reading]
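
As a quick illustration of the lecture topic, a minimal k-nearest-neighbours classifier with Euclidean distance and majority voting. A sketch under my own assumptions about the data layout, not course code.

<pre>
from collections import Counter
import math

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label); query: a feature vector
    dist = lambda a, b: math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
</pre>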
 
 
== Lecture 3: K-means clustering, MLE principle ==
 
[[Meedia:Lecture3.pdf|Slides]]
 
 
[http://ciml.info/dl/v0_8/ciml-v0_8-ch02.pdf Reading I]
 
 
[http://ciml.info/dl/v0_8/ciml-v0_8-ch13.pdf Reading II]
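
For orientation, a minimal sketch of Lloyd's K-means algorithm in numpy. Illustrative only: the random initialisation and the convergence test are simplistic choices of mine, not the course's.

<pre>
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest centre for every point
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: move each centre to the mean of its points
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return centres, labels
</pre>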
 
 
== Lecture 4: Gaussian Mixture Model, EM algorithm ==
 
[[Meedia:Lecture4.pdf|Slides]]
 
 
[http://ciml.info/dl/v0_8/ciml-v0_8-ch14.pdf Reading]
 
 
 
== Lecture 5: History of neural networks, perceptron ==
 
[[Meedia:Lecture5.pdf|Slides]]
 
 
[http://ciml.info/dl/v0_8/ciml-v0_8-ch03.pdf Reading]
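
The classic perceptron update, as a tiny Python sketch. Assumptions of mine: labels are in {-1, +1} and the bias is folded into the weight vector.

<pre>
import numpy as np

def perceptron(X, y, epochs=10):
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # misclassified (or on the boundary)
                w += yi * xi                   # the classic perceptron update
    return w
</pre>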
 
 
== Lecture 6: Artificial neural networks ==
 
[[Meedia:Lecture6.pdf|Slides]]
 
 
[[Meedia:Bp_math.pdf|Backpropagation notes]]
 
 
[http://ciml.info/dl/v0_8/ciml-v0_8-ch08.pdf Reading]
 
 
 
== Lecture 7: Linear regression ==
 
[[Meedia:Lecture7.pdf|Slides]]
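
As a small illustration of fitting a linear model, an ordinary-least-squares sketch in numpy. Assumptions: a bias column is appended, and np.linalg.lstsq stands in for solving the normal equations directly.

<pre>
import numpy as np

def fit_linear(X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
    # least-squares solution of Xb w ~= y, i.e. minimise ||Xb w - y||^2
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w
</pre>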
 
 
== Lecture 8: Logistic regression ==
 
[[Meedia:Lecture8.pdf|Slides]]
 
 
== Lecture 9: Naive Bayes, maximum entropy model ==
 
[[Meedia:Lecture9.pdf|Slides]]
 
 
[http://see.stanford.edu/materials/aimlcs229/cs229-notes2.pdf Reading about Naive Bayes, section 2, lecture notes by Andrew Ng]
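
A minimal multinomial naive Bayes sketch with Laplace smoothing, in the spirit of the reading. Illustrative only; the document layout (token lists with labels) and function names are my own, not the course's.

<pre>
from collections import Counter, defaultdict
import math

def train_nb(docs):
    # docs: list of (token_list, label); returns a predict(tokens) function
    class_counts = Counter(label for _, label in docs)
    token_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        token_counts[label].update(tokens)
        vocab.update(tokens)
    V = len(vocab)
    def log_p(tok, c):  # Laplace-smoothed log P(token | class)
        return math.log((token_counts[c][tok] + 1) /
                        (sum(token_counts[c].values()) + V))
    def predict(tokens):
        return max(class_counts, key=lambda c:
                   math.log(class_counts[c] / len(docs)) +
                   sum(log_p(t, c) for t in tokens if t in vocab))
    return predict
</pre>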
 
 
[http://www.cs.jhu.edu/~jason/tutorials/loglin/#1 Tutorial about log-linear modeling by Jason Eisner]
 
 
== Lecture 10: Sequence modeling ==
 
[[Meedia:Lecture10.pdf|Slides]]
 
 
[http://www.cs.ubc.ca/~murphyk/Bayes/rabiner.pdf Reading] - the classic paper on HMMs
 
 
== Lecture 11: Dimensionality reduction - PCA ==
 
 
[http://www.cs.princeton.edu/picasso/mats/PCA-Tutorial-Intuition_jp.pdf Tutorial on PCA]
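
The tutorial's core recipe, centre the data and project onto the top principal directions, as a short numpy sketch via the SVD. Illustrative only, not a reference implementation.

<pre>
import numpy as np

def pca(X, n_components):
    Xc = X - X.mean(axis=0)                 # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # scores in the top subspace
</pre>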
 
 
== Lecture 12: Support vector machines ==
 
[[Meedia:Lecture12.pdf|Slides]]
 
 
[http://see.stanford.edu/materials/aimlcs229/cs229-notes3.pdf Reading, sections 1-4, lecture notes by Andrew Ng]
 
 
== Lecture 13: SVM and kernels ==
 
[[Meedia:Lecture13.pdf|Slides]]
 
 
[http://see.stanford.edu/materials/aimlcs229/cs229-notes3.pdf Reading, sections 5-8, lecture notes by Andrew Ng]
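
As a small illustration of the kernel trick from the reading, the Gram matrix of the Gaussian (RBF) kernel in numpy. The gamma parameter and names are illustrative choices of mine.

<pre>
import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    # pairwise squared Euclidean distances, then the Gaussian kernel
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)
</pre>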
 
 
== Lecture 14: Kernelized methods, Gaussian processes ==
 
[[Meedia:Lecture14.pdf|Slides]]
 
 
== Lecture 15: Process mining. The alpha algorithm ==
 
[http://courses.cs.ttu.ee/w/images/e/e3/Masinõpe15.pdf Slides]
 
 
[http://www.processmining.org/_media/processminingbook/process_mining_chapter_05_process_discovery.pdf The alpha algorithm slides from Processmining.org]
 
 
== Additional links ==
 
[[Meedia:Latex_example.pdf|LaTeX example]]


[[Meedia:Latex_example.tex|LaTeX example code]]


[http://www.maths.tcd.ie/~dwilkins/LaTeXPrimer/ LaTeX tutorial]
 
 
[http://arkitus.com/patterns-for-research-in-machine-learning/ Tips for scientific programming]
 
 
== Exam ==
 
[[Meedia:SampleExam2.pdf|Example exam questions]]
 
