Computational Finance Journal

Wednesday, November 17, 2004

ML seminar 11/17/04

Amy Greenwald gave us a talk on

Game-Theoretic Learning

No-regret learning algorithms have attracted a great deal of attention in the game-theoretic and machine learning communities. Whereas rational agents act so as to maximize their expected utilities, no-regret learners are boundedly rational agents that act so as to minimize their "regret". In this talk, we discuss the behavior of no-regret learning algorithms in repeated games.

Specifically, we introduce a general class of algorithms called no-$\Phi$-regret learning, which includes common variants of no-regret learning such as no-external-regret and no-internal-regret learning. Analogously, we introduce a class of game-theoretic equilibria called $\Phi$-equilibria. We show that no-$\Phi$-regret learning algorithms converge to $\Phi$-equilibria. In particular, no-external-regret learning converges to minimax equilibrium in zero-sum games; and no-internal-regret learning converges to correlated equilibrium in general-sum games. Although our class of no-regret algorithms is quite extensive, no algorithm in this class learns Nash equilibrium.
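
As a concrete illustration of the no-external-regret case, here is a small sketch of two multiplicative-weights (Hedge) learners playing a repeated zero-sum game. The game (matching pennies), the learning rate, and the horizon are my own illustrative choices, not details from the talk; the point is just that the time-averaged strategies approach the minimax equilibrium.

# Minimal sketch of no-external-regret learning via multiplicative weights
# (Hedge) in repeated matching pennies. Payoff matrix, learning rate, and
# number of rounds are illustrative choices, not taken from the talk.
import numpy as np

T = 5000          # number of repeated rounds
eta = 0.05        # learning rate for the multiplicative-weights update
A = np.array([[1.0, -1.0],   # row player's payoffs in matching pennies
              [-1.0, 1.0]])  # column player's payoffs are -A (zero-sum)

w_row = np.ones(2)   # unnormalized weights over the row player's actions
w_col = np.ones(2)   # unnormalized weights over the column player's actions
avg_row = np.zeros(2)
avg_col = np.zeros(2)

for t in range(T):
    p = w_row / w_row.sum()   # row player's mixed strategy this round
    q = w_col / w_col.sum()   # column player's mixed strategy this round
    avg_row += p
    avg_col += q
    # Expected payoff of each pure action against the opponent's mixed strategy.
    u_row = A @ q
    u_col = -(A.T @ p)
    # Exponential-weights update: regret to any fixed action grows sublinearly.
    w_row *= np.exp(eta * u_row)
    w_col *= np.exp(eta * u_col)

# Time-averaged strategies approach the minimax equilibrium (0.5, 0.5) here.
print("average row strategy:", avg_row / T)
print("average col strategy:", avg_col / T)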


Friday, November 05, 2004

MK seminar of 11/05 ... online Bayesian learning

We read http://www.cis.upenn.edu/~skakade/papers/ml/online_bayes.pdf
in this lecture. This work is by Sham Kakade, a postdoc with Michael Kearns at UPenn.
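
To fix intuition for the online Bayesian setting, here is a minimal toy sketch of sequential Bayesian updating on a Beta-Bernoulli model, where the posterior is revised one observation at a time; this is only my own illustration of the setup, not the algorithm or the bounds analyzed in the paper.

# Toy sketch of online Bayesian updating: a Beta posterior over a coin's
# bias is updated after each observation arrives. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_bias = 0.7
alpha, beta = 1.0, 1.0   # Beta(1, 1) prior, i.e. uniform over the bias

for t in range(1, 201):
    x = int(rng.random() < true_bias)   # observe one coin flip (1 = heads)
    alpha += x                          # posterior update: count heads
    beta += 1 - x                       # posterior update: count tails
    if t % 50 == 0:
        print(f"round {t}: posterior mean = {alpha / (alpha + beta):.3f}")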