IF5OT7 - 6 ECTS - 3rd Edition
"A scientist can discover a new star, but he cannot make one. He would have to ask an engineer to do it for him." – Gordon Lindsay Glegg
The course aims to give an overview of foundational Data Engineering concepts. It is tailored for 1st- and 2nd-year MSc students and PhD students who would like to strengthen their fundamental understanding of Data Engineering, i.e., Data Modelling, Collection, and Wrangling.
The course was originally held as part of different courses at Politecnico di Milano (🇮🇹) by Emanuele Della Valle and Marco Brambilla. The first edition as a unified journey into the data world dates back to 2020 at the University of Tartu (🇪🇪), taught by Riccardo Tommasini, where it is still held by Professor Ahmed Awad (course LTAT.02.007). At the same time, the course was adopted by INSA Lyon (🇫🇷) as OT7 (2022) and PLD "Data".
Students of this course will obtain two sets of skills: one that is deeply technical and necessarily technologically biased, and one that is more abstract (soft) yet essential to building the professional figure that fits into a data team.
The course follows a challenge-based learning approach. In particular, each system is approached independently through a uniform interface (a Python API). The students' main task is to build pipelines, managed by Apache Airflow, that integrate 3 of the 5 presented systems. The course schedule does not include an explanation of how such integration should be done: it is up to each group to figure it out, e.g., by developing a custom operator, by scheduling scripts, or otherwise. The students are then encouraged to discuss their approach and present its limitations.
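As a sketch of what such a pipeline's building blocks could look like, the snippet below defines plain Python callables of the kind one might wrap in Airflow `PythonOperator`s and chain inside a DAG; the function names and toy records are invented for illustration and are not part of the course material.

```python
import json

# Hypothetical task callables: in Airflow, each would be wrapped in a
# PythonOperator (or a custom operator) and chained inside a DAG.

def extract():
    # Pretend this pulls raw records from one of the course systems.
    return [{"user": "alice", "score": "10"}, {"user": "bob", "score": "7"}]

def transform(records):
    # Cast types so the records are ready for the next stage.
    return [{"user": r["user"], "score": int(r["score"])} for r in records]

def load(records, path="scores.json"):
    # Land the cleaned records in the next zone (here: a local file).
    with open(path, "w") as f:
        json.dump(records, f)
    return len(records)

if __name__ == "__main__":
    print(load(transform(extract())))  # number of records loaded
```

Whether these steps become separate operators, one scheduled script, or something else entirely is exactly the design decision left to each group.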
Creativity Bonus: the students will be encouraged to come up with their information needs about a domain of their choice.
After a general overview of the data lifecycle, the course dives into an (opinionated) view of the modern Data Warehouse. To this extent, it touches on basic notions of Data Wrangling, in particular Cleansing and Transformation.
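As a taste of what Cleansing and Transformation mean in practice, here is a minimal, stdlib-only sketch; the records, field names, and cleaning rules are invented for illustration:

```python
def cleanse(rows):
    """Cleansing: drop incomplete records and normalise values."""
    cleaned = []
    for row in rows:
        if row.get("city") is None or row.get("temp_c") is None:
            continue  # discard records with missing fields
        cleaned.append({"city": row["city"].strip().title(),
                        "temp_c": float(row["temp_c"])})
    return cleaned

def transform(rows):
    """Transformation: derive a new column from the cleaned data."""
    return [dict(r, temp_f=r["temp_c"] * 9 / 5 + 32) for r in rows]

raw = [{"city": " lyon ", "temp_c": "21.0"},
       {"city": None, "temp_c": "15.0"},      # dropped by cleansing
       {"city": "tartu", "temp_c": "3.5"}]
print(transform(cleanse(raw)))
```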
At the core of the course's learning outcomes is the ability to design, build, and maintain data pipelines.
Technological Choice (Year 2024/25): Apache Airflow
Regarding the technological stack of the course, modulo the choice of the lecturer, the following systems are encouraged.
The interaction with the systems above is via Python using Jupyter notebooks. The environment is powered by [[Docker]] and orchestrated using [[Docker Compose]].
| Topic | Type | Day | Date | From | To | Material | Video | Comment |
|---|---|---|---|---|---|---|---|---|
| Intro | Lecture | Tuesday | 2024/09/24 | 10:00 | 12:00 | | | |
| Docker | Lecture | Wednesday | 2024/09/25 | 14:00 | 18:00 | | | |
| Data Modeling I | Lecture | Monday | 2024/10/07 | 10:00 | 12:00 | | | |
| Apache Airflow Intro + TD | Lecture | Monday | 2024/10/07 | 14:00 | 18:00 | | | |
| Data Modeling II | Lecture | Tuesday | 2024/10/08 | 14:00 | 16:00 | | | |
| Data Wrangling | Lecture | Wednesday | 2024/10/09 | 14:00 | 18:00 | | | |
| Data Storage | Lecture | Monday | 2024/10/21 | 10:00 | 12:00 | | | |
| Document Stores | Lecture | Monday | 2024/10/21 | 14:00 | 18:00 | | | |
| Project In Class | Lecture | Wednesday | 2024/10/23 | 14:00 | 18:00 | | | LAST DATE TO REGISTER |
| Graph DBs | Lecture | Monday | 2024/11/04 | 14:00 | 18:00 | | | |
| Key-Value Stores | Lecture | Wednesday | 2024/11/06 | 14:00 | 18:00 | | | |
| Exam | Exam | Wednesday | 2024/11/27 | 10:00 | 12:00 | | | TODO |
| Project In Class | Lecture | Wednesday | 2024/11/27 | 14:00 | 18:00 | | | |
| External Talk (TBA) | Exam | Monday | 2024/12/02 | 10:00 | 12:00 | | | |
| Poster Session | Exam | Monday | 2024/12/02 | 14:00 | 18:00 | | | |
NB: the course schedule is subject to change!
The course exam is done in class on the date indicated in the schedule. It will last around 1h. It includes 2-3 bigger topics (data modelling, pipeline design, etc.) and 3-5 smaller topics (simple questions about DE in general, e.g., what is an ETL?).
You are allowed to bring one A4 sheet with all the notes you can fit on it; the only requirement is that the notes are handwritten.
The goal of the project is to implement a few full-stack data pipelines that collect raw data, clean them, transform them, and make them accessible via simple visualisations.
You should identify a domain, two different data sources, and formulate 2-3 questions in natural language that you would like to answer. Such questions will be necessary for the data modelling effort (i.e., creating a database).
"Different" means different formats, access patterns, update frequency, etc. In practice, you have to justify your choice!
If you cannot identify a domain yourself, one will be assigned to you by the course teacher.
The final frontend can be implemented using a Jupyter notebook, Grafana, Streamlit, or any software of your preference to showcase the results. THIS IS NOT PART OF THE EVALUATION!
The project MUST include all three areas discussed in class (see figure above), i.e., ingestion of (raw) data, a staging zone for cleaned and enriched data, and a curated zone for production data analytics. To connect the various zones, you should implement the necessary data pipelines using Apache Airflow. Any alternative should be approved by the teacher. The minimum number of pipelines is 3:
The figure below is meant to depict the structure of the project using the meme dataset as an example.
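In the same spirit, the sketch below mimics the three zones with local folders and plain functions; all file and function names are invented here, and in the actual project each step would be an Airflow pipeline rather than a direct function call:

```python
import json
import pathlib

# Hypothetical three-zone layout on the local filesystem.
RAW, STAGING, CURATED = (pathlib.Path(p) for p in ("raw", "staging", "curated"))

def ingest_pipeline():
    # Zone 1: land the raw data exactly as collected from the source.
    RAW.mkdir(exist_ok=True)
    (RAW / "memes.json").write_text(json.dumps([{"title": " Doge ", "ups": "120"}]))

def staging_pipeline():
    # Zone 2: clean and enrich (trim strings, cast types).
    STAGING.mkdir(exist_ok=True)
    rows = json.loads((RAW / "memes.json").read_text())
    cleaned = [{"title": r["title"].strip(), "ups": int(r["ups"])} for r in rows]
    (STAGING / "memes.json").write_text(json.dumps(cleaned))

def curated_pipeline():
    # Zone 3: production analytics, ready for the frontend.
    CURATED.mkdir(exist_ok=True)
    rows = json.loads((STAGING / "memes.json").read_text())
    top = max(rows, key=lambda r: r["ups"])
    (CURATED / "top_meme.json").write_text(json.dumps(top))

for pipeline in (ingest_pipeline, staging_pipeline, curated_pipeline):
    pipeline()
```

Each of the three functions corresponds to one of the minimum required pipelines: one per zone boundary.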
Project grading: 0-10 for the project, + 5 for the report (accuracy, use of proper terminology, etc.), + 5 for the poster.
The project grading follows a portfolio-based approach: once you have achieved the minimum level (described above), you can start enhancing the project and collecting extra points. How?
Project Registration Form (courtesy of Kevin Kanaan)