About This Mini-Course
This website grew out of a project by Vašek Rozhoň and Tomáš Gavenčiak; the technical material was written by Vašek.
We were thinking about open exposition problems. Just as there are open scientific problems where we haven't yet figured out how Nature works, there are open exposition problems where we haven't yet figured out the best way to convey our knowledge. We ran a joint probability seminar at Charles University where we tried to work out how to teach some topics we find important and under-taught. This text captures some of what we did in that seminar.
Here are two open problems:
- How can we adapt our teaching of computer science theory to convey more about neural networks?
- How can we use the current capabilities of LLMs for teaching?
The first problem motivates the content; the second motivates the form.
Thanks
First and foremost, this was made possible by my amazing wife Hanička. Huge thanks to Tomáš Gavenčiak (see above) and to my coauthors Claude, Gemini, and GPT, which helped massively in creating this mini-course. Big thanks to Richard Hladík, Petr Chmel, Vojta Rozhoň, Robert Šámal, Pepa Tkadlec, and others for their feedback.
Feedback
I'd love to hear your feedback! Please share it here or reach out to Vašek directly.