Lecture on February 06, 2002
0:00:00 - 0:06:20	Course Administration
0:06:20 - 0:07:43	Course Administration: the computer we'll use
0:07:43 - 0:11:10	Course Administration: schedule of projects
0:11:10 - 0:12:15	Course Administration: class topics and calendar
0:12:15 - 0:19:15	Parallel Architectures
0:19:15 - 0:30:00	Parallel Architectures: architecture details
0:30:00 - 0:32:00	Parallel Architectures: more details
0:32:00 - 0:42:45	Parallel Architectures: more details: diagrams of parallel architectures
0:42:45 - 0:47:18	Parallel Architectures: more details: SMPs
0:47:18 - 0:51:25	Parallel Architectures: more details, speeds
0:51:25 - 0:53:00	Parallel Architectures: Moore's law
0:53:00 - 0:59:00	Parallel Architectures: pictures of supercomputers
0:59:00 - 1:04:15	Applications
1:04:15 - 1:07:00	Applications: goals of parallel computing
1:07:00 - 1:09:00	Applications: embarrassingly parallel applications
1:09:00 - 1:15:00	Special Approaches
1:15:00 - 1:23:00	Beowulf

Lecture on February 11, 2002
0:00:00 - 0:05:20	Start, course administration
0:05:20 - 0:08:40	Lattice QCD
0:08:40 - 0:12:10	Lattice QCD: what is QCD and why is it important?
0:12:10 - 0:20:15	Lattice QCD: lattice QCD computations
0:20:15 - 0:28:30	Lattice QCD: clusters for LQCD computation
0:28:30 - 0:29:30	Lattice QCD: conclusion
0:29:30 - 0:36:00	Weather and climate change: history of parallel computing
0:36:00 - 0:40:40	Weather: parallel computing now and in the future
0:40:40 - 0:47:00	Weather: climate change
0:47:00 - 1:00:30	Weather: activities at MIT
1:00:30 - 1:09:30	Weather: questions
1:09:30 - 1:22:00	Bad parallel algorithms

Lecture on February 13, 2002
0:00:00 - 0:01:20	Start, lecture outline
0:01:20 - 0:12:00	First homework assignment
0:12:00 - 0:12:40	Connecting to the class machine, beowulf.lcs.mit.edu
0:12:40 - 0:23:30	OpenMP introduction
0:23:30 - 0:55:20	OpenMP details and example
0:55:20 - 0:59:30	OpenMP running on our beowulf
0:59:30 - 1:22:52	MPI
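
A minimal taste of the message-passing model from the MPI segment above, sketched in Python with mpi4py (a package choice assumed for illustration; the class itself uses C MPI on beowulf.lcs.mit.edu):

    # Each rank sums its own slice; a collective combines the partial sums.
    from mpi4py import MPI  # assumes mpi4py is installed

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id, 0 .. size-1
    size = comm.Get_size()   # number of processes

    local = sum(range(rank, 1000, size))        # this rank's partial sum
    total = comm.allreduce(local, op=MPI.SUM)   # combine across all ranks

    if rank == 0:
        print("sum =", total)   # run with: mpirun -n 4 python sum.py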

Lecture on February 19, 2002
0:07:00 - 0:15:00	Start
0:15:00 - 0:26:00	Parallel Prefix Algorithm (sketch below)
0:26:00 - 0:42:00	Associative Operations
0:42:00 - 0:47:00	The "Myth" of log n
0:47:00 - 0:54:00	Segmented Prefix Operation
0:54:00 - 1:01:00	Fortran
1:01:00 - 1:10:00	Parallel Prefix Variations
1:10:00 - 1:24:00	PRAM
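
A minimal sketch of the parallel prefix (scan) idea above, written as sequential Python; each sweep's updates are independent and would run simultaneously on a parallel machine, giving O(log n) sweeps instead of one O(n) pass:

    def prefix_scan(a, op=lambda x, y: x + y):
        """Inclusive scan by recursive doubling: about log2(n) sweeps."""
        a = list(a)
        d = 1
        while d < len(a):
            # every position reads the *old* array, so this whole sweep
            # could execute in parallel
            a = [op(a[i - d], a[i]) if i >= d else a[i] for i in range(len(a))]
            d *= 2
        return a

    print(prefix_scan([1, 2, 3, 4, 5]))   # [1, 3, 6, 10, 15]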

Lecture on February 20, 2002
0:07:00 - 0:08:00	Start (Parallel Computer Architecture I)
0:08:00 - 0:14:00	Latency/Bandwidth
0:14:00 - 0:23:00	Latency: the details
0:23:00 - 0:31:00	Node Architecture
0:31:00 - 0:36:00	Bus as an interconnect network
0:36:00 - 0:40:00	ASCI White at Lawrence Livermore
0:40:00 - 0:48:00	Crossbar
0:48:00 - 0:55:00	Cache/Cache Coherence
0:55:00 - 1:09:00	CM-2
1:09:00 - 1:20:00	Hypercube
1:20:00 - 1:23:00	Routing (sketch below)
1:23:00 - 1:25:00	Paradiso Cafe Problem
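
For the hypercube and routing segments, one illustrative fact: nodes of a d-cube are labeled with d-bit strings, neighbors differ in exactly one bit, and dimension-ordered (e-cube) routing simply corrects the differing bits one at a time. A small Python sketch:

    def hypercube_route(src, dst):
        """Dimension-ordered route: fix one differing label bit per hop."""
        path, node = [src], src
        diff, bit = src ^ dst, 0
        while diff:
            if diff & 1:               # labels differ in this dimension
                node ^= 1 << bit       # hop across that dimension
                path.append(node)
            diff >>= 1
            bit += 1
        return path

    print(hypercube_route(0b000, 0b101))   # [0, 1, 5]; hops = Hamming distance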

Lecture on February 25, 2002
0:07:00 - 0:14:00	Start
0:14:00 - 0:27:00	Linear Algebra Libraries
0:27:00 - 0:29:00	History, Top 500 ...
0:29:00 - 0:39:00	Optimizing Computation and Memory Use
0:39:00 - 0:43:00	BLAS
0:43:00 - 0:47:00	Self-Adapting Numerical Software
0:47:00 - 0:53:00	Software Generation Strategy
0:53:00 - 1:01:00	Gaussian Elimination (sketch below)
1:01:00 - 1:04:00	Distributed and Parallel Systems
1:04:00 - 1:07:00	Three Basic Linear Algebra Problems
1:07:00 - 1:08:00	Results for Parallel Implementation on the Intel Delta
1:08:00 - 1:26:00	More on Gaussian Elimination (Reorganization)
1:26:00 - 1:28:00	ScaLAPACK and MATLAB*P
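
A bare-bones NumPy sketch of the Gaussian elimination under discussion, as LU with partial pivoting; real libraries (LAPACK, ScaLAPACK) block and pipeline the same arithmetic far more carefully:

    import numpy as np

    def lu_partial_pivot(A):
        """Return P, L, U with P @ A = L @ U."""
        A = A.astype(float).copy()
        n = A.shape[0]
        perm = np.arange(n)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))       # pivot row
            A[[k, p]], perm[[k, p]] = A[[p, k]], perm[[p, k]]
            A[k+1:, k] /= A[k, k]                     # multipliers, in place
            A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # rank-1 update
        return np.eye(n)[perm], np.tril(A, -1) + np.eye(n), np.triu(A)

    A = np.random.rand(4, 4)
    P, L, U = lu_partial_pivot(A)
    print(np.allclose(P @ A, L @ U))   # True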

Lecture on February 27, 2002
0:07:00 - 0:08:00	Linear Algebra: Start 
0:08:00 - 0:11:00	Linear Algebra: Fundamental Triangle 
0:11:00 - 0:12:00	Linear Algebra: Algorithm & Architecture 
0:12:00 - 0:16:00	Linear Algebra: Architecture ... 
0:16:00 - 0:20:00	Dense Linear Algebra 
0:20:00 - 0:22:00	Linear Algebra: Basic Algorithm Change 
0:22:00 - 0:29:00	Linear Algebra: FMA Instruction 
0:29:00 - 0:30:00	Blocking 
0:30:00 - 0:34:00	Linear Algebra: Recursion 
0:34:00 - 0:35:00	Block Column Major Order 
0:35:00 - 0:39:00	Linear Algebra: Square Block ... 
0:39:00 - 0:42:00	Blocked Mat-Mult is Optimal (sketch below)
0:42:00 - 0:44:00	Linear Algebra: Matrix Multiplication is Pervasive 
0:44:00 - 0:48:00	Linear Algebra: Recursion
0:48:00 - 0:49:00	Linear Algebra: Standard Full Format 
0:49:00 - 0:51:00	Linear Algebra: Block Hybrid Full Format 
0:51:00 - 0:55:00	Linear Algebra: Block-Based Algorithms via LAPACK
0:55:00 - 0:57:00	Linear Algebra: Concise Algorithms Emerge 
0:57:00 - 1:01:00	Linear Algebra: Tree Diagram of Cholesky Algorithm 
1:01:00 - 1:04:00	Linear Algebra: Challenge of Machine Independent Design of Dense Linear Algebra Codes via the BLAS
1:04:00 - 1:06:00	Linear Algebra: Can we exploit this general relationship? 
1:06:00 - 1:07:00	Recursive Data Format 
1:07:00 - 1:11:00	Linear Algebra: Dimension Theory 
1:11:00 - 1:12:00	Linear Algebra: Changes 
1:12:00 - 1:13:00	Linear Algebra: New LAPACK Type Routine 
1:13:00 - 1:20:00	Linear Algebra: Answering questions from audience 
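
The blocking theme of this lecture in its simplest form, as a NumPy sketch: tile the matrices so each pair of tiles fits in cache and gets reused many times (the block size 64 is an arbitrary illustration, not a tuned value):

    import numpy as np

    def blocked_matmul(A, B, b=64):
        """C = A @ B computed tile by tile; each b-by-b tile pair is
        meant to stay in cache for the duration of its inner product."""
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(0, n, b):
            for j in range(0, n, b):
                for k in range(0, n, b):
                    C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
        return C

    A = np.random.rand(200, 200); B = np.random.rand(200, 200)
    print(np.allclose(blocked_matmul(A, B), A @ B))   # True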

Lecture on March 04, 2002
0:00:00 - 0:01:00	Start: course administration
0:01:00 - 0:15:00	Matlab demo
0:15:00 - 0:32:00	Matlab: operator overloading; sparse matrices; vectorization
0:32:00 - 0:50:00	Matlab*P overview
0:50:00 - 0:57:00	Matlab*P demo
0:57:00 - 1:02:00	FEMLAB overview
1:02:00 - 1:17:00	FEMLAB demo
1:17:00 - 1:25:00	FEMLAB: how it works
1:25:00 - 1:26:00	Project comments

Lecture on March 06, 2002
0:08:00 - 0:09:00	Start
0:09:00 - 0:30:00	MATLAB*P Demo
0:30:00 - 0:34:00	N-Body Problem
0:34:00 - 0:40:00	N-Body Problem: What is the Computation?
0:40:00 - 0:42:00	N-Body Problem: O(n^2)? Right?
0:42:00 - 0:46:00	N-Body Problem: Variations
0:46:00 - 0:50:00	N-Body Problem: O(n^2) vs. O(n log n)
0:50:00 - 0:53:00	N-Body Problem: n log n Type of Computation
0:53:00 - 0:57:00	Data Structure: Quad-tree
0:57:00 - 1:00:00	Data Structure: Oct-tree
1:00:00 - 1:08:00	Barnes-Hut
1:08:00 - 1:11:00	Multipole (in 1D, constant potentials)
1:11:00 - 1:15:00	Multipole (in 1D, potential = quadratic polynomials)
1:15:00 - 1:17:00	Multipole (global coordinates vs. local coordinates)
1:17:00 - 1:26:00	Multipole (V_i(x) = q_i/(x - x_i); direct-sum sketch below)
1:26:00 - 1:28:00	Multipole Series
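
For reference, the O(n^2) direct sum that Barnes-Hut and multipole methods accelerate, sketched in NumPy for the 1D potentials V_i(x) = q_i/(x - x_i) from the last segments:

    import numpy as np

    def direct_potential(x_eval, x_src, q):
        """V(x) = sum_i q_i / (x - x_i) at every evaluation point.
        All-pairs work: O(#eval * #sources), the cost tree codes beat."""
        diff = x_eval[:, None] - x_src[None, :]   # every (eval, source) pair
        return (q[None, :] / diff).sum(axis=1)

    x_src = np.array([0.1, 0.2, 0.3]); q = np.array([1.0, -2.0, 0.5])
    print(direct_potential(np.array([2.0, 5.0]), x_src, q))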

Lecture on March 11, 2002
0:00:00 - 0:02:00	Start: projects and homework schedule
0:02:00 - 0:07:00	Fast multipole: quick review; adding functions
0:07:00 - 0:12:30	Fast multipole: Analogy with finite precision arithmetic
0:12:30 - 0:21:00	Exclusion sum as matrix/vector multiplication
0:21:00 - 0:25:20	Exclusion sum as matrix decomposition
0:25:20 - 0:29:15	Multipole as matrix decomposition
0:29:15 - 0:32:00	Representing functions: Taylor series
0:32:00 - 0:45:30	Interpolating polynomials
0:45:30 - 0:52:00	Matlab: Symbolic example
0:52:00 - 1:04:30	Multipole series (as Taylor series in 1/x; sketch below)
1:04:30 - 1:10:00	Virtual charges
1:10:00 - 1:17:45	Fitting a polynomial to the example
1:17:45 - 1:18:45	Adding representations of functions
1:18:45 - 1:20:10	Summary of the whole fast multipole algorithm
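
A sketch of the multipole-series segment: for |x| much larger than every |x_i|, expanding q_i/(x - x_i) as a Taylor series in 1/x gives V(x) approximately sum_k a_k / x^(k+1) with a_k = sum_i q_i x_i^k, so the far-field representations of different charge sets add coefficientwise. In NumPy (the truncation order p = 8 is arbitrary):

    import numpy as np

    def multipole_coeffs(x_src, q, p=8):
        """a_k = sum_i q_i * x_i**k for k < p."""
        k = np.arange(p)
        return (q[:, None] * x_src[:, None] ** k[None, :]).sum(axis=0)

    def eval_multipole(a, x):
        return sum(ak / x ** (k + 1) for k, ak in enumerate(a))

    x_src = np.array([0.1, 0.2, 0.3]); q = np.array([1.0, -2.0, 0.5])
    a = multipole_coeffs(x_src, q)
    print(eval_multipole(a, 5.0))        # truncated series
    print((q / (5.0 - x_src)).sum())     # direct sum; the two agree closely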

Lecture on March 13, 2002
0:00:00 - 0:01:43	Start: setting up A/V
0:01:43 - 0:02:25	Tape running.  Project proposal status
0:02:25 - 0:10:00	Multipole continued: object-oriented Matlab; exclude2d
0:10:00 - 0:23:50	Taylor series in Matlab by operator overloading (Python sketch below)
0:23:50 - 0:25:00	Exclusion sum on Taylor series
0:25:00 - 0:33:15	Multipole series in Matlab 
0:33:15 - 0:41:00	Addition algorithms using binomial coefficients
0:41:00 - 0:45:00	Code for multipole addition; the Pascal matrix; example
0:45:00 - 0:50:00	Multipole objects with symbolic entries (!)
0:50:00 - 0:55:30	Multipole: summing up the multipole sum algorithm
0:55:30 - 1:23:00	Multipole: Overall summary, questions
1:23:00 - 1:25:10	Multipole: History and other applications
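
The lecture builds truncated Taylor series as Matlab objects with overloaded arithmetic; here is the same trick as a minimal Python sketch (an analogue for illustration, not the class code):

    class Taylor:
        """Truncated series sum c[k] * x**k, k < len(c)."""
        def __init__(self, c):
            self.c = list(c)
        def _pad(self, other):
            n = max(len(self.c), len(other.c))
            return n, self.c + [0]*(n - len(self.c)), other.c + [0]*(n - len(other.c))
        def __add__(self, other):
            n, a, b = self._pad(other)
            return Taylor([x + y for x, y in zip(a, b)])
        def __mul__(self, other):
            n, a, b = self._pad(other)
            c = [0] * n                        # terms beyond order n truncate
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    if i + j < n:
                        c[i + j] += ai * bj
            return Taylor(c)
        def __repr__(self):
            return " + ".join(f"{v}*x^{k}" for k, v in enumerate(self.c))

    p = Taylor([1, 1, 0])              # 1 + x, carried to order 3
    print(p * p + Taylor([1, 0, 0]))   # 2*x^0 + 2*x^1 + 1*x^2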

Lecture on March 18, 2002
0:00:00 - 0:01:07	Start; project administration
0:01:07 - 0:04:30	Parallel architecture II; caches
0:04:30 - 0:16:00	Cache coherence and how it's enforced
0:16:00 - 0:27:30	Cache coherence: Write invalidate vs write update protocols
0:27:30 - 0:32:40	Cache coherence: Write invalidate with write-back cache
0:32:40 - 0:39:20	Distributed shared memory 
0:39:20 - 0:45:40	The future of cache (according to RAW)
0:45:40 - 0:51:30	A little about the RAW architecture
0:51:30 - 1:00:00	More fun with multipole
1:00:00 - 1:09:30	Experiments with multipole accuracy; sources of rounding error
1:09:30 - 1:21:56	Questions about Beowulf

Lecture on March 20, 2002
0:00:00 - 0:02:00	Start; administration and announcements
0:02:00 - 0:03:25	Beowulf: Building
0:03:25 - 0:07:00	Beowulf: History
0:07:00 - 0:10:00	Beowulf: List of clusters on the web
0:10:00 - 0:17:30	Beowulf: definition, motivation
0:17:30 - 0:37:45	Beowulf: Hardware options
0:37:45 - 0:48:08	Beowulf: Experience with hardware
0:48:08 - 0:57:15	Beowulf: Software options
0:57:15 - 1:00:50	Beowulf: Experience with software
1:00:50 - 1:05:20	Beowulf: Recipe (what really happened)
1:05:20 - 1:08:00	Beowulf: Exercise: Design a $30,000 Beowulf
1:08:00 - 1:20:50	Questions and discussion

Lecture on April 01, 2002
0:00:00 - 0:04:30	Start, Overview
0:04:30 - 0:10:20	MATLAB Demo
0:10:20 - 0:20:00	Sparse Matrices in Real Life
0:20:00 - 0:21:00	MATLAB Matrices: Design Principles
0:21:00 - 0:22:30	Data Structures
0:22:30 - 0:34:00	Algorithms (Ax=b)
0:34:00 - 0:35:20	Solving Linear Equations: x=A\b
0:35:20 - 0:41:30	Graphs and Sparse Matrices: Cholesky Factorization
0:41:30 - 0:46:00	Elimination Tree
0:46:00 - 1:01:00	MATLAB Demo: Sparse Matrices and Graphs
1:01:00 - 1:06:10	Fill-Reducing Matrix Permutations (sketch below)
1:06:10 - 1:12:15	Matching and Block Triangular Form
1:12:15 - 1:16:46	Complexity of Direct Methods
1:16:46 - 1:25:00	The Landscape of Sparse Ax=b Solvers
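
The fill-reducing-permutation segment can be demonstrated in a few lines of SciPy, whose sparse LU happens to wrap SuperLU; the 2D Poisson model matrix is an arbitrary illustration:

    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    # 5-point Poisson matrix on a 30 x 30 grid
    n = 30
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()

    for order in ("NATURAL", "COLAMD"):   # no reordering vs. fill-reducing
        lu = splu(A, permc_spec=order)
        print(order, "factor nonzeros:", lu.L.nnz + lu.U.nnz)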

Lecture on April 03, 2002
0:00:00 - 0:10:00	Start, related webpages
0:10:00 - 0:15:00	Direct Methods
0:15:00 - 0:17:00	GEPP: Gaussian elimination w/ partial pivoting
0:17:00 - 0:21:30	Symmetric Positive Definite: A=R'R
0:21:30 - 0:26:00	Symbolic Gaussian Elimination
0:26:00 - 0:27:30	Sparse Triangular Solve
0:27:30 - 0:33:00	Left-looking Column LU Factorization
0:33:00 - 0:35:30	Symmetric Supernodes
0:35:30 - 0:38:40	Nonsymmetric Supernodes
0:38:40 - 0:40:50	Sequential SuperLU
0:40:50 - 0:44:40	Column Elimination Tree
0:44:40 - 0:47:00	Shared Memory SuperLU
0:47:00 - 0:48:40	Column Preordering for Sparsity
0:48:40 - 0:53:30	SuperLU_DIST: GE with static pivoting
0:53:30 - 0:58:00	Row permutation for heavy diagonal
0:58:00 - 1:05:30	Iterative refinement to improve solution (sketch below)
1:05:30 - 1:08:00	Question: preordering for static pivoting
1:08:00 - 1:15:30	Symmetric-pattern multifrontal factorization
1:15:30 - 1:18:40	MUMPS: distributed memory multifrontal
1:18:40 - 1:24:15	Remark on (nonsymmetric) direct methods
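
A minimal NumPy sketch of the iterative-refinement segment above: solve, form the residual, solve for a correction, repeat. (Here each correction just calls the dense solver; a real code reuses the existing, possibly low-precision, factorization.)

    import numpy as np

    def refine(A, b, x, steps=3):
        for _ in range(steps):
            r = b - A @ x                    # residual, ideally in higher precision
            print("residual norm:", np.linalg.norm(r))
            x = x + np.linalg.solve(A, r)    # correction solve
        return x

    A = np.random.rand(50, 50) + 50 * np.eye(50)
    b = np.random.rand(50)
    x0 = np.linalg.solve(A, b) + 1e-4 * np.random.rand(50)  # sloppy solution
    refine(A, b, x0)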

Lecture on April 08, 2002
0:00:00 - 0:05:00	Start, SuperLU_DIST: iterative refinement
0:05:00 - 0:09:45	Convergence analysis of iterative refinement
0:09:45 - 0:14:45	The Landscape of Sparse Ax=b Solvers
0:14:45 - 0:22:00	Conjugate gradient iteration (sketch below)
0:22:00 - 0:31:15	Conjugate gradient: Krylov subspaces
0:31:15 - 0:37:15	Conjugate gradient: Convergence
0:37:15 - 0:47:15	Matlab demo
0:47:15 - 0:59:30	Conjugate gradient: Parallel implementation
0:59:30 - 1:07:00	Preconditioners
1:07:00 - 1:18:30	Incomplete Cholesky factorization
1:18:30 - 1:20:30	Sparse approximation 
1:20:30 - 1:21:30	Support graph preconditioners: example
1:21:30 - 1:22:00	Multigrid
1:22:00 - 1:25:00	Complexity of direct methods
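
The conjugate gradient iteration covered above fits in a dozen lines; a NumPy sketch for symmetric positive definite A, with one matrix-vector product (the main parallel kernel) plus a few dot products per iteration:

    import numpy as np

    def cg(A, b, tol=1e-10, maxit=1000):
        x = np.zeros_like(b)
        r = b.copy()            # residual b - A x
        p = r.copy()            # search direction
        rs = r @ r
        for _ in range(maxit):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if rs_new ** 0.5 < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    A = np.random.rand(100, 100); A = A @ A.T + 100 * np.eye(100)   # SPD
    b = np.random.rand(100)
    print(np.allclose(A @ cg(A, b), b))   # True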

Lecture on April 10, 2002
0:00:00 - 0:03:30	Start, introducing guest speaker, overview
0:03:30 - 0:09:30	Outline: Embedded Stream Processing
0:09:30 - 0:13:45	Parallel Pipeline
0:13:45 - 0:16:30	Filtering
0:16:30 - 0:19:30	Beamforming and Detection
0:19:30 - 0:21:00	Types of Parallelism
0:21:00 - 0:28:00	Processing Algorithms: FIR overview (sketch below)
0:28:00 - 0:30:00	Processing Algorithms: Beamforming
0:30:00 - 0:33:00	Processing Algorithms: Detection
0:33:00 - 0:38:45	Parallelism Latency and Throughput
0:38:45 - 0:39:15	System Analysis: System Graph
0:39:15 - 0:47:15	System Analysis: Channel Space -> Beam Space
0:47:15 - 0:49:30	System Analysis: Dynamic Load Balancing
0:49:30 - 1:06:00	Software Framework
1:06:00 - 1:13:30	C++ Expression Templates and PETE
1:13:30 - 1:15:30	Performance Results
1:15:30 - 1:19:30	MatlabMPI
1:19:30 - 1:21:00	High Productivity Language Experiments
1:21:00 - 1:25:00	Basic Message Passing
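
For the FIR segment, a one-line NumPy illustration: each output sample is a dot product of the filter taps with a sliding window of the input (the taps and signal are arbitrary illustrations):

    import numpy as np

    taps = np.array([0.25, 0.5, 0.25])      # toy low-pass FIR taps
    x = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.3 * np.random.randn(64)
    y = np.convolve(x, taps, mode="same")   # filtered signal
    print(y[:5])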

Lecture on April 17, 2002
0:00:00 - 0:01:50	Domain Decomposition (DD)
0:01:50 - 0:10:00	DD: Overlapping case of DD, example in FEMLAB
0:10:00 - 0:19:30	DD: Summary of overlapping DD
0:19:30 - 0:21:45	DD: History: why did Schwarz do this in 1870?
0:21:45 - 0:30:00	DD: Nonoverlapping DD, example in FEMLAB
0:30:00 - 0:39:00	DD: Normal derivatives on the boundary 
0:39:00 - 0:40:00	DD: Summary of overlapping and nonoverlapping DD
0:40:00 - 0:42:45	Award ceremony 
0:42:45 - 0:46:00	DD: Discretization
0:46:00 - 0:52:30	DD: The overlapping case:  computational issues
0:52:30 - 1:00:40	DD: Jacobi and Gauss-Seidel iterations (sketch below)
1:00:40 - 1:06:00	DD: Overlapping DD as block Jacobi or block Gauss-Seidel
1:06:00 - 1:11:07	DD: Linear algebra formulation of Jacobi and Gauss-Seidel
1:11:07 - 1:20:00	DD: The nonoverlapping case:  computational issues
1:20:00 - 1:23:45	DD: Solving the Schur complement system iteratively
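
A NumPy sketch of the Jacobi and Gauss-Seidel iterations from this lecture. The point for domain decomposition: Jacobi reads only the previous iterate, so all components (or subdomains, in block form) can update in parallel; Gauss-Seidel uses the freshest values and is inherently more sequential:

    import numpy as np

    def jacobi(A, b, x, sweeps=50):
        D = np.diag(A)
        for _ in range(sweeps):
            x = (b - (A @ x - D * x)) / D        # every entry uses old x
        return x

    def gauss_seidel(A, b, x, sweeps=50):
        for _ in range(sweeps):
            for i in range(len(b)):              # each entry uses fresh x
                x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
        return x

    A = np.random.rand(20, 20) + 20 * np.eye(20)  # diagonally dominant
    b = np.random.rand(20)
    print(np.linalg.norm(A @ jacobi(A, b, np.zeros(20)) - b))
    print(np.linalg.norm(A @ gauss_seidel(A, b, np.zeros(20)) - b))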

Lecture on April 22, 2002
0:00:00 - 0:03:30	Start, news on Japan's fastest computer
0:03:30 - 0:05:30	Partitioning: Special Partitioning: One way to slice a problem in half
0:05:30 - 0:12:00	Partitioning: Laplacian of a graph
0:12:00 - 0:18:15	Partitioning: Edge-Node Incidence Matrix
0:18:15 - 0:27:30	Partitioning: Spectral Partitioning
0:27:30 - 0:39:15	Partitioning: Spectral Partitioning: Solve as an eigenvalue problem (sketch below)
0:39:15 - 0:46:30	Partitioning: Geometric Methods
0:46:30 - 0:50:00	Partitioning: Edge Separator and Vertex Separator
0:50:00 - 0:52:45	Partitioning: Theory vs. Practice
0:52:45 - 0:59:00	Partitioning: Need Theoretical Class of Good Graphs
0:59:00 - 1:00:50	Partitioning: Geometric Separator Theorem
1:00:50 - 1:08:00	Partitioning: The Algorithm (Step 1 through 6)
1:08:00 - 1:18:45	Partitioning: Demo again and Radon Point
1:18:45 - 1:20:00	Partitioning: A few tricks
1:20:00 - 1:20:45	Partitioning: ParMETIS
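
A minimal NumPy sketch of the spectral partitioning segment: form the graph Laplacian L = D - A and split the vertices by the sign of the Fiedler vector, the eigenvector for the second-smallest eigenvalue (the toy graph is an arbitrary illustration):

    import numpy as np

    def spectral_partition(adj):
        L = np.diag(adj.sum(axis=1)) - adj   # graph Laplacian
        vals, vecs = np.linalg.eigh(L)       # symmetric eigensolve
        return vecs[:, 1] >= 0               # sign of the Fiedler vector

    # two triangles joined by a single edge: vertices 0-2 vs. 3-5
    adj = np.zeros((6, 6))
    for i, j in [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (2,3)]:
        adj[i, j] = adj[j, i] = 1
    print(spectral_partition(adj))   # one triangle True, the other False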

Lecture on April 24, 2002
0:00:00 - 0:12:00	Start, the fastest machine in Japan revisited
0:12:00 - 0:14:45	SVMs
0:14:45 - 0:21:20	Supervised Learning
0:21:20 - 0:29:00	Linear Classification (sketch below)
0:29:00 - 0:30:30	The Optimization Problem
0:30:30 - 0:37:30	Non-separable Training Sets
0:37:30 - 0:44:30	The Dual Problem
0:44:30 - 0:50:00	Solving the Dual Problem
0:50:00 - 0:54:45	FMSvm Demo
0:54:45 - 1:12:00	MATLAB Example
1:12:00 - 1:20:00	Structural Risk Minimization
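
As a complement to the dual formulation in the lecture, a hedged sketch of the simpler primal route: subgradient descent on the regularized hinge loss of a linear classifier (the data and hyperparameters are arbitrary illustrations, and this is not the solver shown in class):

    import numpy as np

    def linear_svm_primal(X, y, lam=0.01, lr=0.1, epochs=200):
        """Minimize lam/2 ||w||^2 + mean(max(0, 1 - y (X w + b)));
        labels y must be in {-1, +1}."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            active = y * (X @ w + b) < 1              # margin violators
            gw = lam * w - (y[active, None] * X[active]).sum(axis=0) / len(y)
            gb = -y[active].sum() / len(y)
            w, b = w - lr * gw, b - lr * gb
        return w, b

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.array([-1] * 50 + [1] * 50)
    w, b = linear_svm_primal(X, y)
    print(np.mean(np.sign(X @ w + b) == y))   # training accuracy near 1.0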

Lecture on April 29, 2002
0:00:00 - 0:01:15	FFT: Start, topic: Fast Fourier Transform (FFT)
0:01:15 - 0:05:15	FFT: Definition of discrete Fourier transform
0:05:15 - 0:09:25	FFT: Pictures of FFTs
0:09:25 - 0:23:52	FFT: Example of phone tones
0:23:52 - 0:26:57	FFT: The FFT algorithm
0:26:57 - 0:29:30	FFT: Unshuffle
0:29:30 - 0:31:20	FFT: The matrix recurrence for Fn
0:31:20 - 0:37:00	FFT: Recursive form of the algorithm (sketch below)
0:37:00 - 0:44:45	FFT: Unwrapping the recurrence: bit reversal
0:44:45 - 0:47:45	FFT: Books on the FFT
0:47:45 - 0:53:00	FFT: Performance issues: the butterfly
0:53:00 - 0:57:20	FFT: Parallel issues: communication
0:57:20 - 1:05:30	FFT: notation: putting bars on digits, hypercube FFT
1:05:30 - 1:11:00	FFT: Back to parallel communication issues
1:11:00 - 1:21:07	FFT: Detailed look at a parallel FFT of size 32 on 4 processors
1:21:07 - 1:22:30	FFT: FFTW preview: fastest FFT in the West
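
The recursive form of the FFT from this lecture as a short Python sketch: unshuffle into even- and odd-indexed halves, recurse, then combine with twiddle factors (the butterfly). Input length must be a power of two:

    import cmath

    def fft(x):
        n = len(x)
        if n == 1:
            return list(x)
        even, odd = fft(x[0::2]), fft(x[1::2])   # unshuffle and recurse
        tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
        return [even[k] + tw[k] for k in range(n // 2)] + \
               [even[k] - tw[k] for k in range(n // 2)]

    print([round(abs(v), 6) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])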

Lecture on May 01, 2002
0:00:00 - 0:00:30	Start
0:00:30 - 0:02:11	Cleve Moler introduction
0:02:11 - 0:04:30	History of Matlab
0:04:30 - 0:08:00	Fortran Matlab 
0:08:00 - 0:17:00	Commercial Matlab
0:17:00 - 0:28:45	History of parallel computing
0:28:45 - 0:30:45	Amdahl's law
0:30:45 - 0:40:45	Why is it so hard to program parallel computers?
0:40:45 - 0:47:00	Cornell's Multi-Matlab
0:47:00 - 0:47:45	Nabeel Azar introduction
0:47:45 - 0:49:00	A first look at MultiMatlab
0:49:00 - 0:53:00	What is (and isn't) MultiMatlab?  Why now?
0:53:00 - 0:56:26	Applications
0:56:26 - 0:57:24	Programming patterns
0:57:24 - 1:02:18	Single program, multiple data (SPMD) style
1:02:18 - 1:07:10	Master/slave style
1:07:10 - 1:30:30	Discussion of implementation considerations

Lecture on May 06, 2002
0:00:00 - 0:02:10	Start, course administration
0:02:10 - 0:03:10	FFTW: Introduction: Steve Johnson, Condensed Matter Physics, MIT
0:03:10 - 0:06:00	FFTW: Steve Johnson: FFTW, the "Fastest Fourier Transform in the West"
0:06:00 - 0:13:12	FFTW: Performance of FFTW
0:13:12 - 0:15:38	FFTW: Why FFTW is fast
0:15:38 - 0:19:20	FFTW: User's view of FFTW
0:19:20 - 0:19:36	FFTW: Outline of rest of talk
0:19:36 - 0:23:00	FFTW: The executor
0:23:00 - 0:34:50	FFTW:    Cooley-Tukey FFT algorithm
0:34:50 - 0:37:25	FFTW:    What a plan looks like
0:37:25 - 0:39:58	FFTW:    Explicit recursion and out-of-cache FFTs
0:39:58 - 0:43:20	FFTW:    Cache-oblivious FFT algorithms
0:43:20 - 0:47:15	FFTW:    Vector recursion
0:47:15 - 0:49:00	FFTW: The planner
0:49:00 - 0:56:30	FFTW:    How the planner works
0:56:30 - 0:59:45	FFTW:    Why use adaptive programs?
0:59:45 - 1:01:05	FFTW:    Self-optimization is easy
1:01:05 - 1:02:38	FFTW: The generator, genfft
1:02:38 - 1:04:45	FFTW:    genfft finds good/new algorithms
1:04:45 - 1:06:30	FFTW:    genfft's compilation strategy
1:06:30 - 1:08:00	FFTW:    DAG creation
1:08:00 - 1:12:40	FFTW:    OCAML Cooley-Tukey FFT
1:12:40 - 1:16:25	FFTW:    Rader's algorithm for prime-size DFT
1:16:25 - 1:20:20	FFTW:    The simplifier
1:20:20 - 1:24:40	FFTW: Conclusions and ongoing work

Lecture on May 08, 2002
0:00:00 - 0:14:45	Start, course administration
0:14:45 - 0:16:30	Smart Matter: Frontiers in Computation
0:16:30 - 0:21:30	Three C's of Computer Science
0:21:30 - 0:23:45	Smart Matter Vision: A New Way to Build ? and Systems
0:23:45 - 0:25:30	MEMS: Coupling to the Physical World
0:25:30 - 0:37:30	An "Active Surface"
0:37:30 - 0:43:15	Hierarchical Distributed Control
0:43:15 - 0:47:45	Smart Matter: Coupling
0:47:45 - 0:57:00	Collaborative Sensing: Acoustic Tracking
0:57:00 - 0:59:00	Smart Matter: Machine Diagnostics
0:59:00 - 1:11:00	PolyBot: A Modular Reconfigurable Robot
1:11:00 - 1:14:45	Trends in Control: MEMS + Distributed Coordination
1:14:45 - 1:30:00	Final Thoughts

Lecture on May 13, 2002
0:00:00 - 0:00:34	Projects: Start, student project presentations:
0:00:34 - 0:20:47	Projects: Oskar Bruening, Jack Holloway, Adnan Sulejmanpasic: Matlab*P visualization
0:20:47 - 0:33:00	Projects: Ken Takusagawa:  Tabulating values of the Zeta function
0:33:00 - 0:45:30	Projects: Nathan Warshauer:  Parallel real-time Strategy AI testing
0:45:30 - 0:59:30	Projects: Ian Chan:  Parallel 2D Kolmogorov-Smirnov statistic
0:59:30 - 1:16:00	Projects: Matt Craighead:  Real-time parallel radiosity
1:16:00 - 1:24:00	Projects: Andrew Wilson, Ashley Predith:  Simulation of oxygen ion ordering as a result of temperature change

Lecture on May 15, 2002
0:00:00 - 0:01:15	Projects: Start, student project presentations
0:01:15 - 0:20:45	Projects: Ahmed Ismail, Cynthia Lo:  Parallel off-lattice Monte Carlo simulations
0:20:45 - 0:32:45	Projects: Andrew Menard:  Parallel clock designer
0:32:45 - 0:53:00	Projects: Amay Champaneria:  Parallelizing condensation for visual tracking
0:53:00 - 1:12:00	Projects: Per-Olof Persson, Sreenivasa Voleti:  Solving very large finite element problems in parallel
1:12:00 - 1:22:30	Projects: Dean Christakos:  Linking Beowulf clusters across the grid