Dear all,
It is my pleasure to announce the following CWI Machine Learning
seminar, now with the correct speaker.
Speaker: Reuben Adams (UCL)
Title: Could we lose control of AI? Exploring the arguments of an old
and reignited debate
Date: Friday 16 February, 11:00
Location: CWI L017
Please find the abstract below.
Hope to see you then.
Best wishes,
Wouter
Details:
https://portals.project.cwi.nl/ml-reading-group/events/could-we-lose-contro…
============
Could we lose control of AI? Exploring the arguments of an old and
reignited debate
Reuben Adams (UCL)
Since the beginning of AI, some researchers have warned that we could
lose control of sufficiently advanced AI systems. In a 1951 lecture,
Alan Turing proposed that “once the machine thinking method had started,
it would not take long to outstrip our feeble powers”, concluding that
“[a]t some stage therefore we should have to expect the machines to take
control.” The recent acceleration of AI progress has thrust this
question forward, revealing enormous disagreements in the field. I will
outline the main arguments on both sides, taking the heat out of the
debate by showing where the interesting cruxes lie. How much does the
argument for losing control depend on rapid jumps in AI capabilities
(spoiler: a bit), machines becoming sentient (spoiler: not at all), the
orthogonality of intelligence and values, or believing in an abstract
notion of “general intelligence”? Can we RLHF our values into AIs? Will
AIs be power-seeking by default? What would it take for AIs to become
deceptive, and for us to fail to realise that? I will outline the state
of the debate on these questions, concluding with my own views.
Dear all,
Anyone going to the ALT conference this year might also be interested in
the joint symposium between ALT and ITA.
Vidya Muthukumar asked me to forward the announcement below.
Best,
Tim
--------------------------------------------------------------------------------------------------------------------------------------
*Announcement: ITALT symposium (joint between ITA and ALT), Saturday,
Feb 24, San Diego*
Vidya Muthukumar, together with Alon Orlitsky, Daniel Hsu and Claire
Vernade, is chairing a *1-day symposium between the Information Theory
and Applications (ITA) workshop <https://ita.ucsd.edu/workshop/> and the
Algorithmic Learning Theory (ALT) conference
<http://algorithmiclearningtheory.org/alt2024/>*, which we call
"ITALT". The symposium will be held on *Saturday, Feb 24, at the Bahia
Resort in San Diego *(same location as ITA).
This symposium is interdisciplinary and will bring members of the
information theory and learning theory communities together. The
symposium features invited *tutorials on optimal control/reinforcement
learning and large language models *by *Ankur Moitra and Yuanzhi Li*, an
*open problems session* and an *all-women professional development panel
and mentorship roundtables* organized by Women in Machine
Learning-Theory <https://www.wiml-t.org/> and the Learning Theory
Alliance <https://let-all.com/>. Please see the schedule on our
symposium website
<http://algorithmiclearningtheory.org/alt2024/ita-alt-italt/> for more
details (exact times are subject to minor change).
We will have a registration link for the symposium available early next
week and will write back then, but wanted to announce right away so that
you have the information about this symposium on your calendars and can
plan travel appropriately if you're interested in attending.
We also encourage you to register for at least one of ITA or ALT if you
have not already!
Best,
Vidya Muthukumar, Alon Orlitsky, Daniel Hsu, Claire Vernade
Dear all,
(please feel free to forward)
We have a talk by Prof Herbert Jaeger (https://www.ai.rug.nl/minds/herbert/) from the Rijksuniversiteit Groningen:
Date: 25 Jan 2024
Time: 11 AM to 12 PM
Location: L016, CWI, Amsterdam
Title: "It has taken 2350 years to understand symbolic-logical computing.
Next step: understanding brains and stuff"
Abstract: For digital computing we possess a formal theory foundation
that is deeply rooted in Western philosophical history, is mathematically
transparent, has been worked out, stabilized, and codified into a
standard textbook format, and has demonstrably changed the world through
digital computers. For information processing in neuromorphic
microchips, in other new and yet-to-be-found hardware substrates based on
unconventional physical effects, or in biological brains and other
natural systems, we have nothing like such a unifying formal theory
foundation. But we need one, and it should be not only academically
acceptable but genuinely useful in practice. In my talk I will sketch an
overall picture of this situation and then present my own approach
toward formulating such a general formal theory for information
processing in non-digital, non-symbolic dynamical systems.
Best,
Aditya.
Dear all,
Each year, the AI and Mathematics network (AIM<https://aimath.nl/>) organizes a track at the conference ICT.OPEN<https://ictopen.nl/>. The next edition will take place on April 10 and 11, 2024, in the Jaarbeurs in Utrecht. The goal of the track is to bring together mathematicians and computer scientists who are based in the Netherlands and are working on the foundations of AI. A detailed track description can be found here<https://ictopen.nl/track-fundamental-sciences-in-ai>.
We would like to invite you to submit an abstract to present your work within the track. The details of the call can be found here<https://ictopen.nl/call-for-abstracts-nwo-ictopen2024>. The submission deadline is 16 January 2024.
Best wishes,
Sjoerd Dirksen, Tim van Erven, Silke Glas, Mihaela Mitici
Dear all,
It is my pleasure to announce that a fraudulent and blatantly
plagiarized publication from July 2023, (allegedly) about the
training of layered neural networks in time-dependent environments,
has at last been retracted by the Editorial Office of *MDPI Applied
Sciences*. The retraction process took very long and the communication
was quite unpleasant.
Please see
https://www.mdpi.com/2076-3417/14/2/476
for details and links to the plagiarized original pieces of work.
Note that the term "overlap" in the retraction notice is way too
polite in view of the brazen "cut-and-paste" approach of
the "authors". They literally copy/pasted essentially all figures and
equations from other sources. The text consists of nonsensical
gibberish and contains no original results whatsoever. Four referees
apparently failed to notice the gobbledygook and recognize the
plagiarism.
Best wishes,
Michael Biehl
--
---------------------------------------------------
Prof. Dr. Michael Biehl
Bernoulli Institute for Mathematics,
Computer Science & Artificial Intelligence
P.O. Box 407, 9700 AK Groningen, NL
https://www.cs.rug.nl/~biehl m.biehl(a)rug.nl
Dear colleagues,
Our next BeNeRL Reinforcement Learning Seminar (Jan 11) is coming:
Speaker: Chris Lu (https://chrislu.page<https://chrislu.page/>), PhD student at the University of Oxford.
Title: Accelerating RL research with PureJaxRL
Date: January 11, 16.00-17.00 (CET)
Please find full details about the talk below this email and on the website of the seminar series: https://www.benerl.org/seminar-series
The goal of the online BeNeRL seminar series is to invite RL researchers (mostly advanced PhD students or early postdocs) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments, and their style/approach for getting these to work.
We would be very glad if you forwarded this invitation within your group and to other colleagues who might be interested (also outside the BeNeRL region). Hope to see you on January 11!
Kind regards,
Zhao Yang & Thomas Moerland
Leiden University
——————————————————————
Upcoming talk:
Date: January 11, 16.00-17.00 (CET)
Speaker: Chris Lu (https://chrislu.page<https://chrislu.page/>)
Title: Accelerating RL research with PureJaxRL
Zoom: https://universiteitleiden.zoom.us/j/65411016557?pwd=MzlqcVhzVzUyZlJKTEE0Nk…
Abstract: Recent advancements in JAX have enabled researchers to train RL agents entirely on the accelerator end-to-end, resulting in runtime speedups of over 4000x. Such speedups have the potential to fundamentally change the way we do RL, allowing researchers to efficiently run hundreds of seeds simultaneously, perform rapid hyperparameter tuning, and perform long-horizon meta-evolution for RL. Furthermore, this vastly lowers the computational barrier of entry to Deep RL research, allowing academic labs to perform research using trillions of frames (closing the gap with industry research labs) and enabling independent researchers to get orders of magnitude more mileage out of a single GPU.
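The speedups described above come from compiling an entire training loop into a single accelerator program and vectorizing it over seeds. A minimal toy sketch of that pattern (a stand-in illustration, not PureJaxRL's actual code; `train` here is just a few gradient-descent steps on a 1-D quadratic):

```python
import jax
import jax.numpy as jnp

def train(seed):
    # Stand-in "training run": gradient descent on f(w) = w**2,
    # starting from a seed-dependent random weight.
    key = jax.random.PRNGKey(seed)
    w = jax.random.normal(key)

    def step(w, _):
        grad = 2.0 * w          # analytic gradient of w**2
        return w - 0.1 * grad, None

    # lax.scan keeps the whole loop inside one compiled program.
    w, _ = jax.lax.scan(step, w, None, length=100)
    return w

# jit + vmap: one compiled program runs all seeds in parallel on the
# accelerator, instead of a Python loop over independent runs.
batched_train = jax.jit(jax.vmap(train))
final_weights = batched_train(jnp.arange(8))  # one final weight per seed
```

The same recipe applied to a full agent-plus-environment loop (with the environment itself written in JAX) is what makes running hundreds of seeds or hyperparameter settings in a single call practical.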
Bio: Chris Lu is a third-year DPhil student at the University of Oxford, where he is advised by Professor Jakob Foerster at FLAIR. His work focuses on applying evolution-inspired techniques to meta-learning and multi-agent reinforcement learning. In the summer of 2022 he interned at DeepMind as a research scientist. Previously, he worked as a researcher at Covariant.ai.
Forwarding on behalf of Bart Verheij, who has multiple interesting PhD
positions available in Groningen:
---------- Forwarded message ---------
From: *Bart Verheij* <bart.verheij(a)rug.nl>
Date: Tue, Jan 2, 2024 at 11:31 AM
Subject: PhD position on aligning learning and reasoning at the
University of Groningen
To:
Are you interested in formal AI methods for the alignment of learning
and reasoning?
Then we have a PhD position in Artificial Intelligence and Mathematics
<https://www.rug.nl/about-ug/work-with-us/job-opportunities/?details=00347-0…>
for you!
The context is the implementation and design of cognitive devices.
We also have a PhD position in Physics and Statistics
<https://www.rug.nl/about-ug/work-with-us/job-opportunities/?details=00347-0…>
on the statistical analysis of innovative hardware.
We (Marco Grzegorczyk, Beatriz Noheda, Bart Verheij) are looking for
people with a solid background in AI, mathematics and/or physics.
Please apply by January 15, 2024. We look forward to your applications!