The sequence of lectures
- Lecture 1 (Monday, Feb 1, 2010)
Download the lecture notes and the codes. You will need to run them during the class. This lecture is part of Section 0.
- Lecture 2 (Wednesday, Feb 3, 2010)
Continue with the exercise from the previous lecture. Work through the shell script in the program distribution using man pages to understand what is going on. Discuss what the compiler does, how to use optimization switches (and check whether or not they work). Introduce top down and structured programming, and show how stepwise refinement leads to the programs distributed. This is part of Section 0.
- Lecture 3 (Friday, Feb 5, 2010)
Introduce five classes of physics problems which are not solvable except by computational techniques: generic differential equations, generic quantum problems, the gravitational three-body problem, meta-materials and transport in classical plasmas. Download the lecture notes. This lecture has no associated assignments or problems. This is part of Section 0.
- Lecture 4 (Monday, Feb 8, 2010)
Introduce the notion of floating point numbers, the peculiarity of arithmetic with them, and errors that flow from their use. Examine the summation of series and the weakening of the Riemann rearrangement theorem. Download the lecture notes. The code associated with this lecture can be downloaded. This is part of Section 1.1.
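A short Python illustration (mine, not part of the distributed code) of both peculiarities: floating-point addition is not associative, and the order in which a decaying series is summed changes the accumulated rounding error.

```python
# Floating-point addition is not associative: the rounding depends on
# the order in which the terms are combined.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)  # False in IEEE double precision

# Summing sum(1/k**2) smallest-terms-first keeps small terms from being
# swamped by the large running total, so the backward order is safer.
N = 10**6
forward = 0.0
for k in range(1, N + 1):
    forward += 1.0 / (k * k)
backward = 0.0
for k in range(N, 0, -1):
    backward += 1.0 / (k * k)
# Both approximate pi**2/6 = 1.6449..., but generally not to the same bits.
```

In double precision the two sums here agree closely; the effect is far more dramatic in single precision, which is one reason the lecture works in both.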
- Lecture 5 (Wednesday, Feb 10, 2010)
Examined possible solutions to the problems set in the previous lecture (refer to the lecture notes), especially the problems of errors and efficiency in the summation of series. Worked out a code to determine the relative precision of any floating point number and found how it is related to the machine precision. Checked the portable codes for determining machine integers and machine precision. Examined the propagation of errors in initial conditions for various dynamical systems.
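A hypothetical Python version of such a code (the course's actual code is not reproduced here) determines the machine precision by repeated halving:

```python
import sys

def machine_epsilon():
    """Smallest eps such that 1.0 + eps is distinguishable from 1.0."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

# For IEEE double precision this yields 2**-52.
```

The result agrees with `sys.float_info.epsilon`, the value the runtime itself reports for the same quantity.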
- Lecture 6 (Friday, Feb 12, 2010)
Examine linear systems. Discuss the well-conditioning of the problem of solving a system of linear equations. Define vector norms and the induced matrix norms. Discuss properties of 1-norms, 2-norms and ∞-norms. Find the complexity of dot products on serial and parallel machines: introduce strong and weak scaling. Here are the lecture notes. This is part of Section 1.2.
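The norms defined here are straightforward to code. A Python sketch (function names are my own) of the three vector norms and the two induced matrix norms that have closed forms, maximum absolute column sum for the 1-norm and maximum absolute row sum for the ∞-norm:

```python
def norm1(x):
    return sum(abs(v) for v in x)

def norm2(x):
    return sum(v * v for v in x) ** 0.5

def norm_inf(x):
    return max(abs(v) for v in x)

# Induced matrix norms have closed forms for p = 1 and p = infinity:
def mat_norm1(A):          # maximum absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def mat_norm_inf(A):       # maximum absolute row sum
    return max(sum(abs(v) for v in row) for row in A)
```

For any vector, norm_inf(x) ≤ norm2(x) ≤ norm1(x), which is one of the properties discussed in the lecture.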
- Lecture 7 (Monday, Feb 15, 2010)
Discuss the Gauss elimination algorithm for the solution of linear equations. Examine hard cases, pivoting and the notion of LU decomposition. Here are the lecture notes. The serial codes associated with these algorithms are available from Numerical Recipes. This is part of Section 1.2.
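Numerical Recipes supplies production code; purely as an illustration of the algorithm as described, here is a compact Python sketch of Gauss elimination with partial pivoting and back-substitution:

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gauss elimination with partial pivoting.
    A (list of rows) and b are copied into an augmented matrix."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        # partial pivoting: bring the largest remaining pivot to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # back-substitution on the resulting upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

The elimination loop is the part that an LU decomposition factors out, so that repeated right-hand sides do not repeat the O(n³) work.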
- Lecture 8 (Wednesday, Feb 17, 2010)
Consider the computation of derivatives, and the instabilities associated with this process. Discuss the two different approaches to polynomial interpolation of functions. Examine difference equations and their relationship with differential equations; thereby starting on the subject of numerical solution of differential equations. The lecture notes are here. The algorithms are available in Numerical Recipes. This is part of Section 1.3.
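The instability can be seen directly: shrinking the step h reduces truncation error but amplifies round-off, so the total error is not monotone in h. A small Python demonstration (the function and step sizes are my own choices for illustration):

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h            # truncation error O(h)

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # truncation error O(h**2)

exact = math.cos(1.0)                        # derivative of sin at x = 1
err_moderate = abs(forward_diff(math.sin, 1.0, 1e-5) - exact)
err_tiny_h   = abs(forward_diff(math.sin, 1.0, 1e-14) - exact)
err_central  = abs(central_diff(math.sin, 1.0, 1e-5) - exact)
# err_tiny_h is far larger than err_moderate: round-off dominates.
```

Balancing the two error sources gives an optimal step of roughly eps^(1/2) for the forward difference and eps^(1/3) for the central one, where eps is the machine precision.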
- Lecture 9 (Friday, Feb 19, 2010)
Continue the discussion of generic methods for solving differential equations. Program the Euler and implicit-Euler methods. Identify the instabilities in the numerical solution and find the reason for the lack of stability. Study the leap-frog method as a means of improving the stability without cost in time. The lecture notes are here. The algorithms are available in Numerical Recipes. A collection of mathematica notebooks is here. This is part of Section 1.3.
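For the harmonic oscillator x'' = -x, the instability of the forward Euler method and the bounded energy error of the leap-frog scheme (here in its equivalent kick-drift-kick form) can be checked in a few lines of Python; one Euler step multiplies the energy by exactly (1 + dt²):

```python
def euler(steps, dt):
    # forward Euler for x'' = -x: energy grows by (1 + dt**2) per step
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return 0.5 * (x * x + v * v)

def leapfrog(steps, dt):
    # kick-drift-kick leap-frog: same cost per step, bounded energy error
    x, v = 1.0, 0.0
    for _ in range(steps):
        v -= 0.5 * dt * x
        x += dt * v
        v -= 0.5 * dt * x
    return 0.5 * (x * x + v * v)

e_euler = euler(1000, 0.1)    # blows up: 0.5 * 1.01**1000
e_lf = leapfrog(1000, 0.1)    # stays near the true energy 0.5
```

This is the sense in which leap-frog improves stability "without cost in time": both methods use one force evaluation per step.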
- Lecture 10 (Monday, Feb 22, 2010)
Continue the discussion of generic methods for solving ordinary differential equations; compute the complexity of these methods. The lecture notes are here. The algorithms are available in Numerical Recipes. A collection of mathematica notebooks is here. This is part of Section 1.3.
Resume the discussion of parallelisation of linear algebra problems; discuss the parallelisation of the Gauss elimination algorithm. This is part of Section 1.2.
- Lecture 11 (Friday, Feb 26, 2010)
Discuss notions of probability distributions, correlations and randomness. How does one test for randomness? How is the notion of randomness connected to that of computability? How can one develop ciphers (codes) which seemingly destroy all correlations in the original text? The lecture notes are here. This is part of Section 1.4.
- Lecture 12 (Wednesday, Mar 03, 2010)
How does one deterministically generate apparently random sequences? What kind of tests of randomness do such pseudo-random sequences pass, and which do they fail? Which algorithms give long sequences of pseudo-random numbers, bits, and digits? How does one generate various distributions of pseudo-random numbers? The lecture notes are here. This is part of Section 1.4.
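As one concrete answer, a linear congruential generator produces a deterministic, reproducible sequence that passes simple uniformity tests but fails stronger ones (for example, spectral tests in higher dimensions). A Python sketch, using the multiplier and increment of the quick-and-dirty generator quoted in Numerical Recipes:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """x_{n+1} = (a*x_n + c) mod m, mapped to uniform floats in [0, 1)."""
    state = seed % m
    while True:
        state = (a * state + c) % m
        yield state / m
```

The same seed reproduces the stream exactly, which is what makes Monte Carlo programs debuggable at all.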
If time permits then start the lecture on the eigenvalue problem. The lecture notes are here. A set of Mathematica notebooks illustrates some of the ideas in this lecture. This lecture is part of Section 1.2.
- Lecture 13 (Friday, Mar 05, 2010)
Continue the lecture on the eigenvalue problem. Analyze iterative methods of various kinds. For real symmetric matrices examine Jacobi and Householder iterations and the QR factorization for tri-diagonal matrices. The lecture notes are here. A set of Mathematica notebooks illustrates some of the ideas in this lecture. This lecture is part of Section 1.2.
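The simplest member of this family of iterative methods, power iteration, is a warm-up for the Jacobi, Householder and QR methods named above rather than a replacement for them; it can be sketched in a few lines of Python, with the Rayleigh quotient recovering the signed dominant eigenvalue:

```python
def power_iteration(A, iters=200):
    """Dominant eigenpair of a real symmetric matrix by power iteration."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        nrm = sum(v * v for v in y) ** 0.5
        x = [v / nrm for v in y]           # renormalize each sweep
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(x[i] * Ax[i] for i in range(n))  # Rayleigh quotient
    return lam, x
```

Convergence is geometric in the ratio of the two largest eigenvalues in magnitude, which is why the full-spectrum methods of the lecture are needed in practice.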
- Lecture 14 (Monday, Mar 22, 2010)
Discussed the use of Makefiles to manage large (and small) suites of programs, of shell scripts to organize and explain workflows, and of other automation tools. Examined the complexity of lagged Fibonacci random number generators. Revisited the tools for random-number testing.
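On the complexity point: an additive lagged Fibonacci generator costs one addition and one modulus per number, with no multiplication, unlike an LCG. A Python sketch (of my own) with the classic lags (24, 55):

```python
def lagged_fibonacci(seeds, j=24, k=55, m=2**32):
    """x_n = (x_{n-j} + x_{n-k}) mod m over a circular buffer of k seeds."""
    assert len(seeds) == k and 0 < j < k
    buf = list(seeds)
    i = 0
    while True:
        # slot i currently holds x_{n-k}; overwrite it with x_n
        buf[i] = (buf[(i - j) % k] + buf[i]) % m
        yield buf[i] / m
        i = (i + 1) % k
```

The quality of the stream depends strongly on how the k seed words are produced, which is exactly what the random-number tests of the previous lectures are for.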
- Lecture 15 (Wednesday, Mar 24, 2010)
Discussed how to measure the timing of basic operations. This led us to consider how a program is executed and what the program overheads are. Also involved were questions of statistical inference and modelling of data. (This is a revision of lecture 1).
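A minimal sketch of the measurement idea in Python: time a tight loop with and without the operation and subtract the loop overhead. Timer resolution and interpreter overheads limit how small an operation can be resolved this way, which is where the statistical questions enter.

```python
import time

def time_per_call(func, n=10**5):
    """Average wall-clock time per call of func, loop overhead subtracted."""
    t0 = time.perf_counter()
    for _ in range(n):
        pass                       # measure the bare loop
    overhead = time.perf_counter() - t0
    t0 = time.perf_counter()
    for _ in range(n):
        func()                     # measure loop plus operation
    return (time.perf_counter() - t0 - overhead) / n
```

Repeating the measurement and taking the minimum reduces scheduling noise; modelling the spread of repeated runs is the data-modelling problem mentioned above.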
- Lecture 16 (Friday, Mar 26, 2010)
Discuss how to predict the timing of algorithms, and how much space an algorithm will require. Use such estimates before writing code to decide whether a problem is doable or not. (This is a revision of lecture 3).
- Lecture 17 (Monday, Mar 29, 2010)
Continue revision of lecture 3.
- Lecture 18 (Wednesday, Mar 31, 2010)
Discussed algorithms for zeroes and minima. Found efficient ways of bracketing zeroes and how they may fail. Once the zeroes are bracketed, found efficient ways of reducing the bracketed interval. Found a connection between zeroes and function minimization (optimization). Evolved methods to bracket minima of functions. High level descriptions of these algorithms were written (suitable for top-down conversion to working programs). Introduced steepest descent and conjugate gradient methods. These will be taken up in detail later.
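The first two steps above can be sketched in Python: a geometric bracket expansion in the spirit of the routines in Numerical Recipes, then bisection to shrink the bracket. The expansion fails exactly as described when the function never changes sign, for example at a double root.

```python
def bracket_zero(f, a, b, grow=1.6, max_tries=50):
    """Expand (a, b) geometrically until f changes sign across it."""
    fa, fb = f(a), f(b)
    for _ in range(max_tries):
        if fa * fb < 0:
            return a, b
        if abs(fa) < abs(fb):
            a += grow * (a - b); fa = f(a)   # push out the smaller end
        else:
            b += grow * (b - a); fb = f(b)
    raise ValueError("no sign change found")  # e.g. f(x) = x**2

def bisect(f, a, b, tol=1e-12):
    """Halve a sign-changing bracket (a < b) until shorter than tol."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)
```

Bisection gains one binary digit per iteration; the faster interval-reduction schemes discussed in class trade that guaranteed rate for superlinear convergence on smooth functions.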
- Lecture 19 (Monday, Apr 5, 2010)
Continued the discussion of algorithms for zeroes and minima. Introduced the Newton-Raphson method for zeroes of functions in one dimension. Introduced analogous relaxation methods for the minima of functions of one variable. Considered the problem of creating sorted tables of functions for later use in interpolation. Discussed sorting and searching and found order N² algorithms for the former and order N algorithms for the latter. Discussed binary search and found that it is an order log(N) solution to the searching problem. Left the problem of improved sorting algorithms for later. Introduced hashing as an order 1 method for storing and searching. Left the construction of efficient hashing functions for later discussion.
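The order log(N) claim for binary search on a sorted table can be sketched in Python; the probe counter is an addition of mine so the logarithmic cost can be checked directly:

```python
def binary_search(table, key):
    """Return (index, probes) for key in a sorted table; index -1 if absent."""
    lo, hi, probes = 0, len(table) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if table[mid] == key:
            return mid, probes
        if table[mid] < key:
            lo = mid + 1      # key lies in the upper half
        else:
            hi = mid - 1      # key lies in the lower half
    return -1, probes
```

Each probe halves the remaining interval, so a table of N entries needs at most about log2(N) + 1 probes.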
- Lecture 20 (Wednesday, Apr 7, 2010)
Implemented the algorithm for bracketing the zeroes of a function through a top-down specification in Mathematica. Showed how to use object orientation in Mathematica, which makes it easy to use the functional style of Mathematica to write top-down code. Briefly discussed re-usability and operator overloading.
- Lecture 21 (Wednesday, Apr 21, 2010)
We will discuss optimization including the steepest descent and the conjugate gradient algorithms. Here are the lecture notes.
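For the quadratic form ½xᵀAx − bᵀx with A symmetric positive definite, minimization is equivalent to solving Ax = b; conjugate gradient reaches the minimum in at most n steps in exact arithmetic, where steepest descent merely zig-zags toward it. A self-contained Python sketch (not the notation of the lecture notes):

```python
def conjugate_gradient(A, b, tol=1e-12):
    """Solve Ax = b for symmetric positive definite A by conjugate gradient."""
    n = len(b)
    def mv(v):                      # matrix-vector product A v
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = [bi - yi for bi, yi in zip(b, mv(x))]   # residual b - Ax
    p = r[:]                                    # first search direction
    rs = dot(r, r)
    for _ in range(n):              # at most n steps in exact arithmetic
        Ap = mv(p)
        alpha = rs / dot(p, Ap)     # exact line minimization along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # A-conjugate
        rs = rs_new
    return x
```

Replacing the conjugation step by p = r gives steepest descent, whose convergence rate degrades with the condition number of A; that comparison is the point of the lecture.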
- Lecture 22 (Friday, Apr 23, 2010)
This is the concluding lecture of the course. We discuss the conjugate gradient method and retrace an outline of the course.
- Further contact hours
From May 5 till the final submission of the course projects I will be available for discussion by appointment. Please send me mail if you want to discuss your project.
Copyright: Sourendu Gupta; Last modified on 23 Apr, 2010.