News
Class evaluation
Monday, 16.06.2025 · 21:59
Class evaluation is now active and will remain open until **8 July**. Class evaluation helps us improve the quality of courses and, on top of that, gives you a 24-hour bonus to your enrollment opening time for the next two semesters! I encourage you to fill in the open questions in the evaluation and to make sure that the feedback you submit is clearly described. This will help us improve our courses and our offering.
Popular science lectures on Wednesday 11.06, room 141
Tuesday, 10.06.2025 · 12:53
Today and tomorrow our institute is hosting students from a partner university in Liverpool. Among the attractions will be popular science lectures by **Igor Jakus** and **Klaudia Balcer**, to which we invite all interested students.
Wed. 11.06, 12:15, room 141, Igor Jakus: **Behind the Scenes of PyTorch: Building Autograd from Scratch**
Have you ever wondered how PyTorch works? In this short session, we’ll take a peek under the hood of one of the most popular deep learning frameworks.
I’ll walk you through a minimal implementation of a PyTorch-like autograd system written in plain Python. You’ll see how tracking operations, building a computation graph, and performing backpropagation can be surprisingly simple. This talk aims to replace the black box with a clear picture – so next time you train a model, you’ll know exactly what’s happening behind the scenes.
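For a taste of what the talk is about, below is a minimal, hedged sketch of a scalar autograd engine in plain Python. The `Value` class and its methods are illustrative assumptions, not the speaker's actual material; it only shows the general idea of recording operations, building a computation graph, and backpropagating through it.

```python
# Minimal scalar autograd sketch (illustrative only, not the speaker's code).

class Value:
    """A scalar that records the operations applied to it and can backpropagate."""

    def __init__(self, data, parents=(), backward_fn=lambda: None):
        self.data = data          # the scalar value
        self.grad = 0.0           # d(output)/d(self), filled in by backward()
        self._parents = parents   # nodes this value was computed from
        self._backward = backward_fn

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, parents=(self, other))

        def backward_fn():
            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward_fn
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, parents=(self, other))

        def backward_fn():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward_fn
        return out

    def backward(self):
        # Topologically sort the computation graph, then apply the chain rule
        # from the output back to the leaves.
        order, visited = [], set()

        def visit(node):
            if node not in visited:
                visited.add(node)
                for parent in node._parents:
                    visit(parent)
                order.append(node)

        visit(self)
        self.grad = 1.0  # d(output)/d(output) = 1
        for node in reversed(order):
            node._backward()


# Usage: f(x, y) = x * y + x  ->  df/dx = y + 1, df/dy = x
x, y = Value(2.0), Value(3.0)
f = x * y + x
f.backward()
print(f.data, x.grad, y.grad)  # 8.0 4.0 2.0
```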
Wed. 11.06, 14:15, room 141, Klaudia Balcer: **What is hidden in the Black-box? Explainable AI**
Although Artificial Intelligence is tackling an ever-growing range of tasks and becoming increasingly effective, human understanding of these models, and therefore trust in them, remains limited.
In this talk, Klaudia will show how XAI helps to build high-quality, explainable models across the successive stages of a data science project, focusing on model development and post-hoc explanations. The talk will cover methods ranging from classical game-theoretic approaches such as SHAP and gradient-based methods such as GradCAM to modern techniques for explaining models on temporal data and graph neural networks, such as GNNExplainer.
Klaudia will discuss the usefulness of these methods in various domains, such as medicine, finance, or e-commerce.
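As a small illustration of the post-hoc explanations mentioned above, the sketch below applies SHAP's `TreeExplainer` to a toy gradient-boosted classifier. The dataset and model are illustrative assumptions and are not part of the talk.

```python
# Hedged sketch of a post-hoc explanation with SHAP (illustrative assumptions:
# the dataset and XGBoost model below stand in for a real use case).
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Train a simple tree-ensemble model on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature contributions for the first prediction, plus a global summary plot.
print(shap_values[0])
shap.summary_plot(shap_values, X)
```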