Here's some information for RIT students about courses taught by Prof. Lange:
CSCI 631: Foundations of Computer Vision
- Offered: Fall 2025, TBD in future semesters
- Overview: The first 70% or so of the class is spent on classic CV methods using Python, NumPy, and OpenCV. The last 30% or so is spent on machine learning methods with a focus on convolutional networks and PyTorch. (A brief illustrative sketch of the classic-CV style of work follows this list.)
- Prerequisites: a strong (Python) programming background. Familiarity with linear algebra and vector calculus is strongly encouraged.
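To give a flavor of the classic-CV portion mentioned in the overview, here is a minimal, hypothetical sketch of a standard edge-detection step using OpenCV and NumPy. The image path and thresholds are placeholders, and this is illustrative only, not actual course material.

```python
# Illustrative sketch only (not course material): a classic CV step with
# OpenCV and NumPy. "example.jpg" is a placeholder path; thresholds are arbitrary.
import cv2
import numpy as np

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # load as grayscale
assert img is not None, "could not read example.jpg"
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)           # smooth to suppress noise
edges = cv2.Canny(blurred, 50, 150)                    # Canny with hysteresis thresholds
print(f"edge pixels: {np.count_nonzero(edges)} / {edges.size}")
```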
COGS 621: Scientific Computing
- Offered: every fall; I teach in even years, Prof. Alex Ororbia teaches in odd years.
- Description: This course will introduce students to foundational concepts in scientific computation that are useful for engineering and the mathematical, computational, and physical sciences. Topics will include project management on high-performance computing clusters, optimization of linear and nonlinear systems, mitigating floating-point numerical instability, constrained and unconstrained optimization, fitting generative models/simulation-based fitting, bias/variance/cross-validation/regularization, dimensionality reduction and data visualization, numerical differentiation and integration, acceleration via computation graphs/JIT compilation/GPU, automatic differentiation, and pseudo-random number generators and Monte Carlo methods. Implementation of various numerical methods and solvers will be done in Python. Connections to computational modeling of cognition will be made throughout the course as motivating examples for various key concepts and tools.
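As a hedged illustration of one topic from the list above (pseudo-random number generators and Monte Carlo methods), here is a short NumPy sketch. The integrand and sample size are arbitrary choices for illustration, not course material.

```python
# Illustrative sketch only (not course material): a seeded PRNG used for
# Monte Carlo integration of an arbitrary integrand on [0, 1].
import numpy as np

rng = np.random.default_rng(seed=0)      # seeded PRNG for reproducibility
n = 100_000
x = rng.uniform(0.0, 1.0, size=n)        # draws from U(0, 1)
fx = np.exp(-x**2)                       # integrand evaluated at the samples
estimate = fx.mean()                     # Monte Carlo estimate of the integral
std_err = fx.std(ddof=1) / np.sqrt(n)    # standard error of the estimate
print(f"integral of exp(-x^2) on [0, 1] ~ {estimate:.4f} +/- {std_err:.4f}")
```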
COGS 637 / COS 537: Advanced Neuro-AI: Probability and Generative Models
- Offered: TBA; likely even-year spring semesters, starting Spring 2026
- Description: This course will cover advanced topics at the intersection of probabilistic modeling, neural computation, and artificial intelligence, with an emphasis on approaches using generative models. Students will develop a deep understanding of how probability theory provides a unified framework for understanding the human brain and intelligence more broadly. Topics include fundamental concepts in probability theory and philosophical foundations of uncertainty, methods for learning and inference in probabilistic models, and probabilistic methods in machine learning like VAEs. The course will cover how probabilistic models have contributed to our understanding of learning, perception, and decision-making. The course will emphasize both theoretical foundations and practical implementations, preparing students to engage with current research in computational neuroscience and AI. Prior exposure to linear algebra, multivariable calculus, machine learning, and probability theory is recommended.
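To hint at what "learning and inference in probabilistic models" can look like in code, here is a small sketch of conjugate Bayesian updating for a coin-flip model. The prior parameters and observed counts are made up for illustration, and this is not actual course material.

```python
# Illustrative sketch only (not course material): Bayesian inference in a
# simple generative model, with a Beta prior over a coin's bias updated by
# hypothetical binomial data.
from scipy import stats

alpha_prior, beta_prior = 2.0, 2.0       # Beta(2, 2) prior over the bias
heads, flips = 7, 10                     # hypothetical observations

# Conjugacy gives the posterior in closed form: Beta(alpha + heads, beta + tails).
alpha_post = alpha_prior + heads
beta_post = beta_prior + (flips - heads)
posterior = stats.beta(alpha_post, beta_post)

print(f"posterior mean bias: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```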