Dear all,
On Friday, April 22nd, we will have two sessions: one with
Jon Williamson and one with Rohit Parikh.
Everyone is cordially invited!
---- First Session ----
Speaker: Jon Williamson (University of Kent)
Date and Time: Friday, April 22nd 2016, 13:00-14:30
Venue: Science Park 107, Room F1.15
Title: Inductive Logic for Automated Decision Making.
Abstract. According to Bayesian decision theory, one's acts should
maximise expected utility. To calculate expected utility one needs not
only the utility of each act in each possible scenario but also the
probabilities of the various scenarios. It is the job of an inductive
logic to determine these probabilities, given the evidence to hand.
The most natural inductive logic, classical inductive logic,
attributable to Wittgenstein, was dismissed by Carnap due to its
apparent inability to capture the phenomenon of learning from
experience. I argue that Carnap was too hasty to dismiss this logic:
classical inductive logic can be rehabilitated, and the problem of
learning from experience overcome, by appealing to the principles of
objective Bayesianism. I then discuss the practical question of how to
calculate the required probabilities and show that the machinery of
probabilistic networks can be fruitfully applied here. This culminates
in an objective Bayesian decision theory that has a realistic prospect
of automation.
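As a rough illustration (ours, not the speaker's), the expected-utility calculation described in the abstract can be sketched in a few lines: each act has a utility in each scenario, an inductive logic supplies the scenario probabilities, and the agent picks the act with the highest expected utility. The acts, scenarios, and numbers below are entirely hypothetical.

```python
# Hypothetical utilities of each act in each scenario,
# and hypothetical scenario probabilities (as an inductive
# logic might assign them from the evidence to hand).
utilities = {
    "take umbrella": {"rain": 5, "sun": 2},
    "leave umbrella": {"rain": 0, "sun": 8},
}
probabilities = {"rain": 0.3, "sun": 0.7}

def expected_utility(act):
    # Weight the utility of the act in each scenario by that
    # scenario's probability and sum over scenarios.
    return sum(probabilities[s] * u for s, u in utilities[act].items())

# Bayesian decision theory: choose the act maximising expected utility.
best_act = max(utilities, key=expected_utility)
print(best_act)  # "leave umbrella": 0.3*0 + 0.7*8 = 5.6 beats 0.3*5 + 0.7*2 = 2.9
```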
---- Second Session ----
Speaker: Rohit Parikh (City University of New York, Brooklyn College and CUNY Graduate Center)
Date and Time: Friday, April 22nd 2016, 16:00-17:30
Venue: ILLC Seminar Room F1.15, Science Park 107
Title: An Epistemic Generalization of Rationalizability.
Abstract. Savage showed us how to infer an agent’s subjective
probabilities and utilities from the bets which the agent accepts or
rejects. But in a game theoretic situation an agent’s beliefs are not
just about the world but also about the probable actions of other
agents, which will depend on their beliefs and utilities. Moreover, it
is unlikely that agents know the precise subjective probabilities or
cardinal utilities of other agents. An agent is more likely to know
something about the preferences of other agents and something about
their beliefs. In view of this, the agent is unlikely to have a
precise best action which we can predict, but is more likely to have a
set of "not so good" actions which the agent will not perform.
Ann may know that Bob prefers chocolate to vanilla to strawberry. She
is unlikely to know whether Bob would prefer vanilla ice cream or a
50-50 chance of chocolate and strawberry. So Ann’s actions and her
beliefs need to be understood in the presence of such partial
ignorance. We propose a theory which will let us decide when Ann is
being irrational, based on our partial knowledge of her beliefs and
preferences, and, assuming that Ann is rational, how to infer her
beliefs and preferences from her actions.
Our principal tool is a generalization of rational behavior in the
context of ordinal utilities and partial knowledge of the game which
the agents are playing.
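As a small illustration of the underlying idea (our sketch, not the speaker's definition): even with only ordinal preferences known, one can rule out actions that are strictly worse than some alternative in every scenario, leaving the "not so good" actions a rational agent will avoid. The ranks below are hypothetical, echoing the chocolate/vanilla/strawberry example.

```python
# Hypothetical ordinal ranks (higher = more preferred) for Bob's
# options under two possible scenarios of the other player's behaviour.
# No cardinal utilities or probabilities are assumed, only orderings.
ranks = {
    "chocolate":  {"s1": 3, "s2": 3},
    "vanilla":    {"s1": 2, "s2": 2},
    "strawberry": {"s1": 1, "s2": 1},
}

def strictly_dominated(act, acts):
    """True if some other act is ordinally better in every scenario."""
    return any(
        all(ranks[other][s] > ranks[act][s] for s in ranks[act])
        for other in acts if other != act
    )

# A rational agent will not perform a strictly dominated action,
# so only the undominated actions remain candidates.
undominated = [a for a in ranks if not strictly_dominated(a, ranks)]
print(undominated)  # only "chocolate" survives in this toy example
```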
----
Hope to see you there!
The LIRa team