Dear all,
We will have our next LIRa session on Thursday, March 4th. Our speaker is Frederik Van De Putte. You can find the details of the talk below. We will use our recurring zoom link: https://uva-live.zoom.us/j/92907704256?pwd=anY3WkFmQVhLZGhjT2JXMlhjQVl1dz09 (Meeting ID: 929 0770 4256, Passcode: 036024).
Speaker: Frederik Van De Putte
Date and Time: Thursday, March 4th 2021, 16:30-18:00, Amsterdam time.
Venue: online.
Title: The Problem of No Hands: Responsibility Voids in Collective
Decisions.
Abstract. The problem of no hands concerns the existence of so-called
responsibility voids: cases where a group makes a certain decision,
yet no individual member of the group can be held responsible for this
decision. Criteria-based collective decision procedures play a central
role in philosophical debates on responsibility voids. In particular,
the well-known discursive dilemma has been used to argue for the
existence of these voids. But there is no consensus: others argue that
no such voids exist in the discursive dilemma under the assumption
that casting an untruthful opinion is eligible. We argue that, under
this assumption, the procedure used in the discursive dilemma is
indeed immune to responsibility voids, yet such voids can still arise
for other criteria-based procedures. We provide two general
characterizations of the conditions under which criteria-based
collective decision procedures are immune to these voids. Our general
characterizations are used to prove that responsibility voids are
ruled out by criteria-based procedures involving an atomistic or
monotonic decision function. In addition, we show that our results
imply various other insights concerning the logic of responsibility
voids.
This is joint work with Hein Duijf.
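To see the kind of divergence the discursive dilemma trades on, here is a
toy Python sketch (purely illustrative, with invented judges and
propositions; it is not the paper's formal framework). Three judges vote
on two premises p and q and on the conclusion p-and-q; premise-wise
majority accepts the conclusion while conclusion-wise majority rejects it.

# Illustrative sketch of the discursive dilemma; the judges and the
# decision rule c = p and q are made up for this example.
judges = {
    "judge1": {"p": True,  "q": True},   # accepts both premises, hence c
    "judge2": {"p": True,  "q": False},  # rejects c
    "judge3": {"p": False, "q": True},   # rejects c
}

def majority(votes):
    return sum(votes) > len(votes) / 2

# Premise-based procedure: aggregate each premise, then apply the rule.
p_accepted = majority([j["p"] for j in judges.values()])  # True (2 of 3)
q_accepted = majority([j["q"] for j in judges.values()])  # True (2 of 3)
premise_based = p_accepted and q_accepted                 # True

# Conclusion-based procedure: each judge applies the rule, then aggregate.
conclusion_based = majority([j["p"] and j["q"] for j in judges.values()])  # False (1 of 3)

print(premise_based, conclusion_based)  # True False: the two procedures diverge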
Hope to see you there!
The LIRa team
Dear all,
We will have our next LIRa session on Thursday, February 25th. Our speaker is Sven Rosenkranz. You can find the details of the talk below. We will use our recurring zoom link: https://uva-live.zoom.us/j/92907704256?pwd=anY3WkFmQVhLZGhjT2JXMlhjQVl1dz09 (Meeting ID: 929 0770 4256, Passcode: 036024).
Speaker: Sven Rosenkranz
Date and Time: Thursday, February 25th 2021, 16:30-18:00, Amsterdam time.
Venue: online.
Title: To be in no position to know to be in no position to know:
methods, safety, and luminosity.
Abstract. To the best of my knowledge, no epistemic logician seriously
entertains the thought that negative introspection holds: clearly,
some cases of ignorance go unnoticed. By contrast, many epistemic
logicians seem unfazed by extant arguments against positive
introspection and continue working with logics at least as strong as
S4. There are arguments against positive introspection that rely on
the fact that knowledge implies belief. Such arguments can be defused
by recasting positive introspection in terms of the factive notion of
being in a position to know, where, to be in a position to know p, one
needn’t believe p. Knowledge requires safe belief, and being in a
position to know accordingly requires being in a position to safely
believe. There are powerful safety-based arguments against positive
introspection, even in its revised formulation. I will not here
rehearse these arguments. Instead, I will argue for the claim that the
safety requirement poses no threat to ¬K¬Kp → K¬K¬Kp, where
‘K’ is short for ‘One is in a position to know that’. If
¬K¬Kp → K¬K¬Kp holds, ¬K¬Kp encodes a luminous condition. The
lead idea driving the argument is that methods for telling if ¬Kp
holds are best seen as being functionally dependent on the best
methods for telling if p holds, and as being lopsided in that they are
not, at the same time, methods for telling if Kp holds. Accordingly, I
must say a lot about methods first. Elsewhere I have argued that
¬K¬Kp is necessary and sufficient for one’s having propositional
justification for p. If ¬K¬Kp → K¬K¬Kp can be upheld, then this
implies that having propositional justification is a luminous
condition – an idea that internalists typically cherish but struggle
to substantiate. The lesson for the epistemic logician, if any, is
that systems weaker than S4 should be devised that accommodate the
luminosity of ¬K¬Kp.
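To make the schema concrete, here is a toy Python check (an illustration
only, not part of the talk: it reads K as an ordinary box over a
hand-made finite Kripke model, and does not engage with the safety- and
methods-based account the speaker develops).

def box(R, holds_at, w):
    # K-style box: true at w iff the condition holds at every R-successor of w.
    return all(holds_at(v) for v in R[w])

def luminosity_schema_holds(R, p_worlds, w):
    # Evaluate ¬K¬Kp → K¬K¬Kp at world w, with K as the box over R.
    Kp = lambda v: box(R, lambda u: u in p_worlds, v)            # Kp
    not_K_not_Kp = lambda v: not box(R, lambda u: not Kp(u), v)  # ¬K¬Kp
    return (not not_K_not_Kp(w)) or box(R, not_K_not_Kp, w)

# An invented reflexive, transitive model: w sees w, v, u; p holds only at v.
R = {"w": {"w", "v", "u"}, "v": {"v"}, "u": {"u"}}
print(luminosity_schema_holds(R, p_worlds={"v"}, w="w"))  # False on this model

On this particular reflexive, transitive model the schema fails at w when
K is read as a plain box, which only goes to show that it is a substantive
constraint rather than something every S4-style frame delivers for free.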
Hope to see you there!
The LIRa team
Dear all,
We will have our next LIRa session on Thursday, February 18th. Our speaker is Marija Slavkovik. You can find the details of the talk below. We will use our recurring zoom link: https://uva-live.zoom.us/j/92907704256?pwd=anY3WkFmQVhLZGhjT2JXMlhjQVl1dz09 (Meeting ID: 929 0770 4256, Passcode: 036024).
Speaker: Marija Slavkovik
Date and Time: Thursday, February 18th 2021, 16:30-18:00, Amsterdam time.
Venue: online.
Title: Conflicts in machine ethics.
Abstract. Machine ethics is concerned with the problem of automating
moral reasoning. Specifically, given a set of options in a context
and some information on which moral values or obligations are to be
enforced, how to compute what is to be done by a machine? The obvious
challenge of machine ethics is often considered to be: what should a
machine never do? But that is the wrong question. A less obvious one
is: who should decide what a machine should never do? And for people
who understand collective decision-making: how? Moral values and moral
obligations tend to be ambiguously specified affairs. Furthermore,
people have different views when it comes to morality. All this
spells: conflicts. Conflicting views must be resolved as part of
automating moral reasoning. Drawing on past and ongoing work, I discuss
three approaches to resolving conflicts when they are represented as
logical inconsistencies: argumentation, social choice, and normative
conflict resolution.
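As a toy illustration of the social-choice route only (argumentation and
normative conflict resolution are not sketched here, and the stakeholders
and options below are invented), conflicting permissibility judgments can
be aggregated by simple majority:

# Hypothetical stakeholders judge whether each option is morally permissible.
judgments = {
    "alice":   {"share_data": False, "withhold_data": True},
    "bob":     {"share_data": True,  "withhold_data": True},
    "chandra": {"share_data": False, "withhold_data": False},
}

def majority_permits(option):
    votes = [j[option] for j in judgments.values()]
    return sum(votes) > len(votes) / 2

permitted = {o for o in ["share_data", "withhold_data"] if majority_permits(o)}
print(permitted)  # {'withhold_data'}: the machine picks only majority-permitted options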
Hope to see you there!
The LIRa team
Dear all,
We will have our next LIRa session on Thursday, February 11th. Our speaker is Elise Perrotin. You can find the details of the talk below. We will use our recurring zoom link: https://uva-live.zoom.us/j/92907704256?pwd=anY3WkFmQVhLZGhjT2JXMlhjQVl1dz09 (Meeting ID: 929 0770 4256, Passcode: 036024).
Speaker: Elise Perrotin
Date and Time: Thursday, February 11th 2021, 16:30-18:00, Amsterdam time.
Venue: online.
Title: Knowledge \"whether\" and belief \"about\" as a lightweight
alternative to Dynamic Epistemic Logic.
Abstract. Dynamic Epistemic Logic (DEL) is the standard logic used to
reason about agents' knowledge and the actions that may affect both
their knowledge and the world around them. It is very expressive and
general, but this comes at a cost: in particular, DEL planning (that
is, determining the existence of a plan given a planning task) is
undecidable.
In this talk I will be presenting the logic EL-O, a lightweight logic
based on observation (or "knowing whether"), and how it can be applied
to planning. This logic is more expressive than other attempts at
simplifying DEL for planning, while retaining the same complexities as
classical propositional calculus and classical planning through
polynomial translations. I will also precisely situate EL-O w.r.t. DEL
and argue that EL-O is not only interesting as a "weak DEL", but has a
number of advantages over DEL in terms of how
intuitive it makes designing models and actions. I will finish by
discussing our current attempts to extend this approach to beliefs,
going from \"believing that\" to \"having a belief about\" and considering
the possible ramifications of epistemic-doxastic situations.
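A rough sketch of the observation-based idea (a deliberate simplification
for illustration, not the actual EL-O language or semantics; the agents
and variables are invented): "agent i knows whether p" is modelled as
"p belongs to i's set of observed variables", so epistemic preconditions
of actions reduce to membership tests.

# Invented toy state and observation sets, for illustration only.
state = {"door_open": False, "light_on": True}           # the actual world
observed = {"anne": {"light_on"}, "bob": {"door_open"}}  # who sees which variable

def knows_whether(agent, var):
    return var in observed[agent]

def knows_that(agent, var, value=True):
    # An agent knows that var has this value iff it observes var
    # and var actually has that value.
    return knows_whether(agent, var) and state[var] == value

# A toy action with an epistemic precondition: anne may toggle the light
# only if she knows whether it is on.
if knows_whether("anne", "light_on"):
    state["light_on"] = not state["light_on"]

print(state, knows_that("bob", "door_open", False))  # bob knows the door is closed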
Hope to see you there!
The LIRa team
Dear all,
We will have our next LIRa session tomorrow. We will use our recurring zoom link: https://uva-live.zoom.us/j/92907704256?pwd=anY3WkFmQVhLZGhjT2JXMlhjQVl1dz09 (Meeting ID: 929 0770 4256, Passcode: 036024). You can find the details below.
Speaker: Sophia Knight
Date and Time: Thursday, February 4th 2021, 16:30-18:00, Amsterdam time.
Venue: online.
Title: Reasoning about agents who may know other agents’ strategies
in Strategy Logic.
Abstract. In this talk I will discuss some new developments in
Strategy Logic with imperfect information. Strategy Logic is concerned
with agents' strategic abilities in multi-agent systems, and unlike
ATL, treats strategies as first-class objects in the logic,
independent from the agents. Thus, in imperfect information settings,
Strategy Logic raises delicate issues, such as what agents know about
one another's strategies. I will describe a new version of Strategy
Logic that ensures that agents' strategies are uniform, and allows a
formal description of their knowledge about each other's strategies.
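For intuition about the uniformity constraint (a toy sketch with invented
states and observations, not the formalism of the talk): a strategy is
uniform for an agent when it prescribes the same action in any two states
the agent cannot tell apart.

def is_uniform(strategy, observation):
    # strategy: state -> action; observation: state -> what the agent observes.
    chosen = {}
    for state, action in strategy.items():
        obs = observation[state]
        if obs in chosen and chosen[obs] != action:
            return False  # two indistinguishable states get different actions
        chosen[obs] = action
    return True

# States s1 and s2 look the same to the agent (both yield observation "o").
observation = {"s1": "o", "s2": "o", "s3": "o2"}
print(is_uniform({"s1": "a", "s2": "a", "s3": "b"}, observation))  # True
print(is_uniform({"s1": "a", "s2": "b", "s3": "b"}, observation))  # False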
Hope to see you there!
The LIRa team