Department of Information and Computing Sciences


Big data

Website: website containing additional information
Course code: INFOMBD
Credits: 7.5 ECTS
Period: period 3 (weeks 6 through 15, i.e., 6-2-2017 through 13-4-2017; retake in week 27)
Timeslot: D
Participants: 69 subscriptions so far
Schedule: the official schedule can be found in Osiris
Teachers:
form     time             weeks  room        teacher
lecture  Wed 13:15-15:00  6-14   ANDRO-C101  Arno Siebes, Ad Feelders
         Fri 13:15-15:00  6-14   RUPPERT-D
Contents:

Big Data is as much a buzzword as an apt description of a real problem: the amount of data generated per day grows faster than our ability to process it. Hence the need for algorithms and data structures that allow us, e.g., to store, retrieve, and analyze vast amounts of widely varied data that stream in at high velocity.

In this course we limit ourselves to the data mining aspects of the Big Data problem, more specifically to the problem of classification in a Big Data setting. To be viable for huge amounts of data, algorithms should have low complexity; in fact, it is easy to think of scenarios where only sublinear algorithms are practical. That is, algorithms that see only a (vanishingly small) part of the data: algorithms that only sample the data.
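As a minimal illustration of the sampling idea (a Python sketch; the dataset and function names are invented for this example): by Hoeffding's inequality, estimating a single frequency to within epsilon with probability at least 1 - delta requires a sample whose size depends only on epsilon and delta, not on the size of the data set.

    import math
    import random

    def sample_size(eps, delta):
        # Hoeffding's inequality: m >= ln(2/delta) / (2 * eps^2) samples
        # suffice to estimate a frequency within eps of its true value,
        # with probability at least 1 - delta.
        return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

    def estimate_frequency(data, predicate, eps=0.01, delta=0.05):
        # The sample size is independent of len(data): sublinear access.
        m = sample_size(eps, delta)
        sample = random.choices(data, k=m)  # uniform, with replacement
        return sum(predicate(x) for x in sample) / m

    # Toy usage: estimate the fraction of even numbers among ten million.
    print(estimate_frequency(range(10_000_000), lambda x: x % 2 == 0))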

We start with PAC learning, where we derive tight bounds on the sample size needed to learn (simple) concepts almost always almost correctly from a sample of the data, both in the clean (no noise) and in the agnostic (allowing noise) case. The concepts we study may appear to allow only for very simple, hence often weak, classifiers. However, the boosting theorem shows that they can represent whatever can be represented by strong classifiers.
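For orientation, the standard sample-complexity bounds for a finite concept class, as derived in the Shalev-Shwartz and Ben-David book listed under Literature, take the following form; epsilon quantifies "almost correctly" and delta "almost always":

    % realizable (clean) case: the sample complexity grows as 1/epsilon
    m_{\mathcal{H}}(\epsilon,\delta) \le \left\lceil \frac{\ln|\mathcal{H}| + \ln(1/\delta)}{\epsilon} \right\rceil

    % agnostic (noisy) case: the dependence worsens to 1/epsilon^2
    m_{\mathcal{H}}(\epsilon,\delta) \le \left\lceil \frac{2\ln(2|\mathcal{H}|/\delta)}{\epsilon^{2}} \right\rceil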

PAC learning algorithms are based on the assumption that a data set represents only one such concept, which is obviously untrue for almost any real data set. So, next we turn to frequent pattern mining, which is geared towards mining all concepts from a data set. After introducing basic algorithms to compute frequent patterns, we will look at two ways to speed them up: first, by sampling, using theoretical concepts from the PAC learning framework such as the VC dimension and Rademacher complexity; second, by parallelization using Map/Reduce.
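To make this concrete, below is a minimal, unoptimized Python sketch of Apriori-style frequent itemset mining together with the sampling speed-up; the function names and the slack parameter are illustrative, and choosing the sample size and slack rigorously is exactly what the VC-dimension and Rademacher-complexity bounds are for.

    import random
    from collections import Counter

    def frequent_itemsets(transactions, min_support, max_size=3):
        # Apriori property: every subset of a frequent itemset is itself
        # frequent, so size-k candidates are built from frequent (k-1)-sets.
        n = len(transactions)
        counts = Counter(item for t in transactions for item in set(t))
        frequent = {frozenset([i]) for i, c in counts.items() if c / n >= min_support}
        result = set(frequent)
        for k in range(2, max_size + 1):
            candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
            counts = Counter()
            for t in transactions:
                items = set(t)
                counts.update(c for c in candidates if c <= items)
            frequent = {c for c, cnt in counts.items() if cnt / n >= min_support}
            result |= frequent
        return result

    def frequent_itemsets_sampled(transactions, min_support, sample_size, slack=0.01):
        # Mine a uniform random sample at a slightly lowered threshold, so
        # that with high probability no truly frequent pattern is missed.
        sample = random.sample(transactions, sample_size)
        return frequent_itemsets(sample, min_support - slack)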

A problem of frequent pattern mining is the pattern explosion. While this is the topic of another course, Pattern Set Mining, we will end this course with one approach to battle this problem, viz., by using MDL (the Minimum Description Length principle). First we will introduce Algorithmic Information Theory, and subsequently we will use it for this concrete problem. Finally, we will show how this approach can, again, be used to construct a classifier.
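As a toy illustration of the underlying idea, compression as a computable stand-in for Kolmogorov complexity, here is a Python sketch (not the algorithm treated in the lectures) that assigns an instance to the class whose data encodes it most cheaply:

    import zlib

    def code_length(text):
        # Compressed size (in bytes) as a crude proxy for the ideal,
        # uncomputable Kolmogorov complexity.
        return len(zlib.compress(text.encode(), 9))

    def mdl_classify(instance, class_corpora):
        # Pick the class whose corpus D minimizes the extra description
        # cost of the instance x: argmin L(D + x) - L(D).
        def extra(corpus):
            return code_length(corpus + "\n" + instance) - code_length(corpus)
        return min(class_corpora, key=lambda c: extra(class_corpora[c]))

    # Toy usage with two made-up 'classes' of strings.
    corpora = {"dna": "ACGTACGTTTACG" * 50, "prose": "the quick brown fox " * 50}
    print(mdl_classify("ACGTTTACGTACGT", corpora))  # expected: 'dna'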

Literature:

The slides, complemented by your own lecture notes, are in principle all you need. Background reading material is, however, also available:

  • For the first part of the course, we largely follow the first 8 chapters of the book Understanding Machine Learning: From Theory to Algorithms
    by Shai Shalev-Shwartz and Shai Ben-David
    • you can legally download the book from a webpage of the first author
    • you can, of course, also buy this book.
    • It is a good book, so if you want to become a data scientist, buying it is a sensible choice.
  • For the later parts of the course we will point to the papers that the lectures are based on. You can download these papers (again, legally) from anywhere within the UU network.
Minimum effort to qualify for 2nd chance exam: to be admitted to the retake exam, the original result must be at least a 4.