We’re looking for a scientific software developer with Machine Learning (ML) expertise (MSc or PhD in ML).
Together with our current mathematics and Machine Learning developers, you will work on developing new functionality where state-of-the-art ML techniques are applied to problems in computational chemistry.
https://www.scm.com/news/job-opening-software-developer-machine-learning-in…
Dr. S.J.A. van Gisbergen
Director
Software for Chemistry & Materials B.V.
De Boelelaan 1083
1081 HV Amsterdam, The Netherlands
E-mail: vangisbergen(a)scm.com
http://www.scm.com
Dear Colleagues,
KU Leuven has a fixed-term (5-year), part-time (95%) academic vacancy in the area of Artificial Intelligence at its campus in Bruges. We are looking for internationally oriented candidates with an excellent research record, preferably focused on deploying AI in business and industrial environments, and with demonstrable didactic skills. The expected start date is October 1, 2021.
The successful applicant will be appointed in the Faculty of Engineering Technology, which is part of the Science, Engineering and Technology Group of KU Leuven. The position is interdepartmental, with a joint appointment in both the Department of Computer Science and the Department of Electrical Engineering (ESAT). The successful applicant will engage in collaborations with experts in data analysis and AI and with academic users of AI technology in the broader region, and will be an active member of the KU Leuven Institute for Artificial Intelligence (Leuven.ai). This interdisciplinary institute combines the AI expertise of more than 60 professors, hundreds of researchers, and 500 master's students.
As one of the 10 most innovative universities in the world, KU Leuven has been very successful in the creation of spin-off companies, illustrating the socio-economic relevance of its research. Many of these spin-off companies are technology leaders within their domain, and their products and services are renowned internationally.
For more information, see:
https://www.kuleuven.be/personeel/jobsite/jobs/56002799?hl=en&lang=en
Best regards,
Peter
—
prof. dr. Peter Karsmakers
KU Leuven | Faculty of Engineering Technology
Geel Campus | Kleinhoefstraat 4, 2440 Geel (Belgium)
tel. +32 14 72 13 58 | www.iiw.kuleuven.be/geel
Department of Computer Science | DTAI-ADVISE
ADVISE (http://www.kuleuven.be/advise) - DTAI (https://dtai.cs.kuleuven.be/)
Dear Dutch machine learners,
This year's national NeurIPS debriefing will take place on:
Friday March 5, 2021
14h00-17h00
via the following Zoom link: https://uva-live.zoom.us/j/89293881695
PhD students and/or senior researchers will briefly present the paper they found the most interesting at NeurIPS 2020. If you are interested in NeurIPS, please consider being one of the presenters. There are still slots available.
The format is 3 hours of short talks (15 or 20 minutes each, in English). It is not required to have attended NeurIPS, and you would usually not present your own paper. Talks can be informal, in a friendly atmosphere, so this is an ideal opportunity for PhD students to gain experience in giving presentations. The goal is to have presentations from multiple universities and to foster dialogue between subdisciplines of the Dutch machine learning community.
If you are interested in giving a talk or if you have any questions, feel free to contact me at j [dot] j [dot] mayo [at] uva [dot] nl.
Preliminary list of speakers and topics:
Mustafa Celikok, TU Delft - A Unifying View of Optimism in Episodic Reinforcement Learning by Gergely Neu, Ciara Pike-Burke
Alexander Mey, TU Delft - Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes by Yi Tian, Jian Qian, Suvrit Sra
Jack Mayo, University of Amsterdam - Optimal Algorithms for Stochastic Multi-Armed Bandits with Heavy Tailed Rewards by Kyungjae Lee, Hongjun Yang, Sungbin Lim, Songhwai Oh
For up-to-date information and an overview of past meetings, see www.timvanerven.nl/neurips-debriefing/
Best regards,
Jack Mayo and Tim van Erven
TU Delft (Netherlands) offers two fully-funded PhD positions as part of the
Artificial Intelligence Lab for Biosciences.
The first PhD candidate will develop novel approaches for optimal batch
scheduling in plants. The second PhD candidate will develop novel reinforcement
learning algorithms for self-driving labs.
These PhD positions are part of the Artificial Intelligence Lab for Biosciences,
the AI4B.io Lab (www.ai4b.io), which TU Delft and Royal DSM recently established.
The lab is the first of its kind in Europe to apply artificial intelligence to
full-scale biomanufacturing, from microbial strain development to process
optimization and scheduling. The AI4B.io Lab is part of the Dutch National
Innovation Center for AI (ICAI) and starts with five synergistic research
projects; an additional three PhD positions on machine learning are also
available within the lab.
You will contribute to DSM’s challenges in developing bio-based products and
optimizing industrial-scale bio-based processes. You will have access to real
industrial R&D and/or factory data, work in an industrial as well as an academic
environment, and have the opportunity to develop entrepreneurial skills, as the
AI4B.io Lab collaborates with the Biotech Campus Delft and Planet b.io, making
for an excellent learning and research environment.
Further details and application forms can be found via the following links.
Smart plant scheduling:
https://www.tudelft.nl/over-tu-delft/werken-bij-tu-delft/vacatures/details?…
Reinforcement learning for a self-driving lab:
https://www.tudelft.nl/over-tu-delft/werken-bij-tu-delft/vacatures/details?…
Application deadline for full consideration: 15 March 2021
Informal enquiries: Mathijs de Weerdt (m.m.deweerdt(a)tudelft.nl) for the first
position, Matthijs Spaan (m.t.j.spaan(a)tudelft.nl) for the second position.
FYI. Please forward to anyone who might be interested.
-------- Forwarded Message --------
Subject: Craig Boutilier @ Challenges and Opportunities in Multiagent RL
Date: Thu, 4 Feb 2021 14:08:43 +0100
From: Frans Oliehoek <fa.oliehoek(a)gmail.com>
To: Frans Oliehoek - EWI <F.A.Oliehoek(a)tudelft.nl>
CC: c.amato(a)northeastern.edu, garnelo(a)google.com,
f.a.oliehoek(a)tudelft.nl, somidshafiei(a)google.com, karltuyls(a)google.com
Dear all,
After a fantastic inaugural presentation by Michael Bowling, we are
excited to announce the next speaker in our virtual seminar series on
the Challenges and Opportunities for Multiagent Reinforcement Learning
(COMARL):
Speaker: Craig Boutilier, Google Research
Title: Maximizing User Social Welfare in Recommender Ecosystems
(abstract and bio can be found below)
Date: Thursday February 11th, 2021
Time: 17:00 CET / 16:00 UTC / 08:00 PST
Location: via Google Meet or YouTube
For detailed instructions on how to join, please see here:
https://sites.google.com/view/comarl-seminars/how-to-attend
For additional information, please see our:
* Website <https://sites.google.com/view/comarl-seminars> (includes schedule, instructions on how to join, etc.)
* Twitter account (for speaker announcements and more!): @ComarlSeminars <https://twitter.com/ComarlSeminars>
* Google Groups (to receive invitations): comarlseminars(a)googlegroups.com
We look forward to seeing you there!
Best regards from the organizers,
Chris Amato (Northeastern University),
Marta Garnelo (DeepMind),
Frans Oliehoek (TU Delft),
Shayegan Omidshafiei (DeepMind),
Karl Tuyls (DeepMind)
Speaker:
Craig Boutilier
Google Research,
Mountain View, CA, USA
Title:
Maximizing User Social Welfare in Recommender Ecosystems
Abstract:
An important goal for recommender systems is to make recommendations
that maximize some form of user utility over (ideally, extended periods
of) time. While reinforcement learning has started to find limited
application in recommendation settings, for the most part, practical
recommender systems remain "myopic" (i.e., focused on immediate user
responses). Moreover, they are "local" in the sense that they rarely
consider the impact that a recommendation made to one user may have on
the ability to serve other users. These latter "ecosystem effects" play
a critical role in optimizing long-term user utility. In this talk, I
describe some recent work we have been doing to optimize user utility
and social welfare using reinforcement learning and equilibrium modeling
of the recommender ecosystem; draw connections between these models and
notions such as fairness and incentive design; and outline some future
challenges for the community.
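As a toy illustration of the "ecosystem effects" mentioned above (not taken from the talk; the utility numbers, the one-item-per-user capacity constraint, and the scipy-based assignment are illustrative assumptions), compare a myopic per-user policy with a welfare-maximizing global assignment:

import numpy as np
from scipy.optimize import linear_sum_assignment

# utility[u, i] = utility user u derives from being recommended item i;
# each item (e.g., a capacity-limited provider) can serve only one user.
utility = np.array([
    [0.90, 0.80, 0.10],   # user 0 likes items 0 and 1 almost equally
    [0.85, 0.20, 0.15],   # user 1 strongly prefers item 0
    [0.30, 0.25, 0.20],   # user 2 is mostly indifferent
])

# Myopic/local policy: each user in turn gets their individually best available item.
available = set(range(utility.shape[1]))
myopic_total = 0.0
for u in range(utility.shape[0]):
    best = max(available, key=lambda i: utility[u, i])
    myopic_total += utility[u, best]
    available.remove(best)

# Ecosystem-aware policy: choose the assignment that maximizes total user welfare.
rows, cols = linear_sum_assignment(utility, maximize=True)
welfare_total = utility[rows, cols].sum()

print(f"myopic total utility:   {myopic_total:.2f}")   # 0.90 + 0.20 + 0.20 = 1.30
print(f"welfare-optimal total:  {welfare_total:.2f}")  # 0.80 + 0.85 + 0.20 = 1.85

The myopic policy gives user 0 their favourite item, but that item is the only one user 1 values highly, so total welfare drops; accounting for the ecosystem trades a small loss for one user against a large gain for another.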
Bio:
Craig Boutilier is a Principal Scientist at Google. He received his
Ph.D. in Computer Science from U. Toronto (1992), and has held positions
at U. British Columbia and U. Toronto (where he served as Chair of the
Dept. of Computer Science). He co-founded Granata Decision Systems,
served as a technical advisor for CombineNet, Inc., and has held
consulting/visiting professor appointments at Stanford, Brown, CMU and
Paris-Dauphine.
Boutilier's current research focuses on various aspects of decision
making under uncertainty, including: recommender systems; user modeling;
MDPs, reinforcement learning and bandits; preference modeling and
elicitation; mechanism design, game theory and multi-agent decision
processes; and related areas. Past research has also dealt with:
knowledge representation, belief revision, default reasoning and modal
logic; probabilistic reasoning and graphical models; multi-agent
systems; and social choice.
Boutilier served as Program Chair for IJCAI-09 and UAI-2000, and as
Editor-in-Chief of the Journal of AI Research (JAIR). He is a Fellow of
the Royal Society of Canada (FRSC), the Association for Computing
Machinery (ACM) and the Association for the Advancement of Artificial
Intelligence (AAAI). He also received the 2018 ACM/SIGAI Autonomous
Agents Research Award.
TU Delft (Netherlands) offers two fully-funded PhD positions as part of
the EU FET Open project Epistemic AI.
The first PhD candidate will develop novel approaches for combinatorial
optimisation under epistemic uncertainty. The second PhD candidate will
develop novel reinforcement learning algorithms that aim for robust and
safe behaviour in partially-known environments.
The project's goal is to create a new paradigm for next-generation AI that
provides worst-case guarantees on its predictions through proper modelling of
real-world uncertainties.
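Purely as a generic illustration of what "worst-case guarantees in a partially-known environment" can mean (this sketch is not part of the project description; the two-state MDP, its rewards, and the probability intervals are made-up assumptions), here is a minimal robust value iteration over interval-valued transition probabilities:

import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
reward = np.array([[1.0, 0.0],    # reward[s, a]
                   [0.0, 2.0]])
# Epistemic uncertainty: for each (s, a) only an interval [lo, hi] on the
# probability of landing in state 0 is known (state 1 gets the remainder).
p0_lo = np.array([[0.6, 0.1], [0.3, 0.0]])
p0_hi = np.array([[0.9, 0.4], [0.7, 0.2]])

V = np.zeros(n_states)
for _ in range(200):
    Q = np.empty((n_states, n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            # Nature adversarially picks the transition probability inside the
            # interval that minimizes the expected next-state value.
            worst = min(p * V[0] + (1 - p) * V[1] for p in (p0_lo[s, a], p0_hi[s, a]))
            Q[s, a] = reward[s, a] + gamma * worst
    V = Q.max(axis=1)  # the agent still maximizes over its own actions

print("robust (worst-case) state values:", V)

Because the expected next-state value is linear in the unknown probability, the adversarial choice always sits at an interval endpoint, so checking the two endpoints suffices; the resulting values are a lower bound that holds for every model consistent with the intervals.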
Further details and application form:
https://www.tudelft.nl/over-tu-delft/werken-bij-tu-delft/vacatures/details?…
Application deadline for full consideration: 28 February 2021
Informal enquiries: Neil Yorke-Smith (n.yorke-smith(a)tudelft.nl) for the
first position, Matthijs Spaan (m.t.j.spaan(a)tudelft.nl) for the second
position.
Topic: 'Interactive Robot Learning', see job description below
Location: TU Delft, The Netherlands
Duration: 24 months
Application deadline: March 15th 2021
Starting date: at the latest September 2021
Salary: EUR 3,491 - EUR 4,402
For more details see https://www.tudelft.nl/over-tu-delft/werken-bij-tu-delft/vacatures/details?…
Applications need to be submitted via the website above. For more information about this vacancy, please contact Jens Kober, Associate Professor, email: J.Kober@tudelft.nl
*Job description*
Programming and re-programming robots is extremely time-consuming and expensive, which presents a major bottleneck for new industrial, agricultural, care, and household robot applications. The goal of this project is to enable robots to learn how to perform manipulation tasks from few human demonstrations, based on novel interactive machine learning techniques. Robot learning will no longer rely on initial demonstrations only, but it will effectively use additional user feedback to continuously optimize the task performance. It will enable the user to directly perceive and correct undesirable behavior and to quickly guide the robot toward the target behavior.
You will explore one or several aspects of interactive robot learning: learning force-interaction skills with user inputs, requesting additional advice, interactive imitation and reinforcement learning for sequences, interactive inverse reinforcement learning, and evaluating how humans prefer to teach robots. You will evaluate the developed approaches with generic real-world robotic force-interaction tasks related to handling and (dis)assembly. You will demonstrate the potential of the newly developed teaching framework with challenging bi-manual tasks and a final user study evaluating how well novice human operators can teach novel tasks to a robot.
The Postdoc positions are in the context of the project "Teaching Robots Interactively" (TERI), funded by the European Research Council as an ERC Starting Grant.
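For readers less familiar with interactive imitation learning, here is a minimal, hypothetical sketch of the general idea (a DAgger-style correction loop; this is not the project's actual method, and the simulated teacher, toy dynamics, and nearest-neighbour policy are illustrative assumptions only):

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def expert_action(state):
    """Stand-in for the human teacher: push the state toward the origin."""
    return -0.5 * state

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(20, 2))           # initial demonstrations
actions = np.array([expert_action(s) for s in states])
policy = KNeighborsRegressor(n_neighbors=3).fit(states, actions)

for episode in range(5):
    s = rng.uniform(-1, 1, size=2)
    for _ in range(10):
        a = policy.predict(s[None])[0]              # robot acts with current policy
        correction = expert_action(s)               # teacher watches and corrects
        if np.linalg.norm(a - correction) > 0.1:    # intervene only on large errors
            states = np.vstack([states, s])
            actions = np.vstack([actions, correction])
        s = s + a                                   # toy dynamics: state += action
    # refit on the aggregated demonstrations plus corrections
    policy = KNeighborsRegressor(n_neighbors=3).fit(states, actions)

The point of the loop is that the policy keeps being retrained on states the robot actually visits, with the teacher intervening only when the executed action deviates noticeably from the desired one, rather than relying on the initial demonstrations alone.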
*Requirements*
You have a PhD degree in systems and control, robotics, applied mathematics, artificial intelligence, machine learning, or a related subject. You must have strong analytical skills and must be able to work at the intersection of several research domains. Experience with real robot applications, bi-manual robots, interactive learning, and/or user studies is a plus.
You must have a demonstrated ability to conduct high-quality research according to international standards, as evidenced by publications in international, high-quality journals. A very good command of the English language is required, as well as excellent communication skills.