One-year Researcher/Postdoc position (MSc or PhD) on Predictive Methods for Web Crawling at the faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, the Netherlands. The vacancy can be found through the following link:
https://www.utwente.nl/en/organisation/careers/!/2020-178/researcherpostdoc…
The closing date of the vacancy is 15 January 2021.
We are looking for a researcher for a one-year position in the WebInsight project at the University of Twente, Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS), jointly between the chairs of Data Management and Biometrics (in the Computer Science department) and Mathematics of Operational Research (in the department of Applied Mathematics).
Candidates with an MSc can apply for the researcher position; candidates with a PhD can apply for the post-doc position.
The WebInsight project develops, together with project partners, ultra-fast crawling of the Web for real-time market-intelligence platforms: a practical challenge involving big Web data. The work introduces predictive elements into Web crawling, such as scoring functions for websites and pages that reflect the patterns of change in those pages. We achieve this by mining Web data and then building predictive machine-learning models. The Web data includes both content data, which characterizes pages semantically, and structural data, which characterizes link changes on the Web. Big data is made available in the project by frequent crawling of the Web.
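As a purely hypothetical illustration of such a scoring function (the project's actual models are not specified here; all function names and numbers below are invented for the sketch), a common approach in the crawling literature models page changes as a Poisson process and prioritizes pages by the probability that they have changed since the last crawl:

```python
import math

def estimated_change_rate(visits, changes_observed):
    """Naive estimator of a page's change rate per crawl interval,
    inverting P(changed between visits) = 1 - exp(-rate)."""
    p = changes_observed / visits
    p = min(p, 0.99)              # cap to avoid log(0) for always-changing pages
    return -math.log(1.0 - p)

def crawl_score(visits, changes_observed, time_since_last_crawl):
    """Higher score = higher estimated probability that the page has
    changed since it was last crawled."""
    rate = estimated_change_rate(visits, changes_observed)
    return 1.0 - math.exp(-rate * time_since_last_crawl)

# A page seen changed on 8 of 10 visits and last crawled 2 intervals ago
# outranks a page changed on 1 of 10 visits crawled 5 intervals ago:
busy = crawl_score(10, 8, 2.0)
quiet = crawl_score(10, 1, 5.0)
```

In a crawler, such scores would feed a priority queue deciding which pages to re-fetch first; the project's predictive models additionally use content and link-structure features.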
You will be working with Prof. Nelly Litvak (https://people.utwente.nl/n.litvak), Assist. Prof. Doina Bucur (https://people.utwente.nl/d.bucur), one other post-doctoral researcher, and a number of talented junior researchers from among our students in Computer Science or Applied Mathematics.
YOUR PROFILE
* You are a highly motivated and enthusiastic researcher.
* You have a PhD or MSc degree in a science such as Computer Science, Data Science, Applied Mathematics or Statistics, Applied Physics, or another science related to the study of the Web (or similar large, structured systems) using ideas from data mining, network science, and machine learning.
* You have a strong interest in the topic of Web mining, modelling, and analysis.
* Programming skills (Python) and experience with basic data science tools and techniques (statistics, data mining) are essential. Practical skills in mining big data, and training and interpreting statistical models via machine learning are a plus.
* You have excellent collaboration skills.
* You have a good command of English (level C1 or higher).
______________
Prof.dr. Nelly Litvak
Dept. of Applied Mathematics, University of Twente, and
Dept. of Mathematics and Computer Science Eindhoven University of Technology
https://www.utwente.nl/ewi/sor/about/staff/litvak/
Tel: +31(53)489-3388
We are looking for a candidate with strong algorithmic and programming skills to join our Algorithmics group at TU Delft as a PhD student under the supervision of Dr Mathijs de Weerdt and Dr Emir Demirović. The aim is to develop novel trustworthy, safe, and explainable machine learning algorithms using combinatorial optimisation techniques. The application deadline is the New Year. Note that prior experience with machine learning is not required, but strong algorithmic and programming skills are essential. For more information, please see the ad: https://www.academictransfer.com/en/295058/phd-position-in-algorithmics/?fb…
--
Dr Emir Demirović
Assistant Professor
Algorithmics group<https://www.tudelft.nl/ewi/over-de-faculteit/afdelingen/software-technology…>, TU Delft
emirdemirovic.com<http://www.emirdemirovic.com>
All,
It’s a pleasure to invite you to our online event below, with special guest Jay McClelland, co-editor of the 1986 PDP books on connectionist (neural network / machine learning) models of intelligence, perception, language and the mind, who will provide a perspective on the open challenges for AI.
For more information see below and the website at
https://www.universiteitleiden.nl/en/sails/research/webinar-dec-2020-art-so…
Kind regards,
Peter van der Putten
Assistant professor, LIACS, Leiden University
The future of AI is human
The SAILS AI research initiative at Leiden University invites you to join us for an exciting event at the intersection of AI, art, science and society, on Tuesday 15 December, 16.30-18.30 CET, on the SAILS YouTube channel<https://www.youtube.com/channel/UCzHbM9npjxFMy2LxlZnF5ag>. Free registration here<https://forms.gle/Ss8UwEzr1L6JuGds8> and more info on the SAILS website<https://www.universiteitleiden.nl/en/sails/research/webinar-dec-2020-art-so…>.
We are not alone anymore. Artificial Intelligence is changing society, for better or for worse, and we will need to find new ways to relate to our artificial counterparts. Will our joint future be symbiotic, antagonistic, or one of fruitful collaboration? And what can we actually learn from AI about what makes us human – perhaps even beyond intelligence? What are the grand challenges that are still out there, and do we even know how to begin to tackle them?
SAILS, the Leiden University-wide research programme on AI, is pleased to invite you to a livestream talk show on December 15, where artists, scientists and designers will debate and imagine our future with AI, through a whirlwind of very current yet not so middle-of-the-road artworks and research projects.
Jay McClelland from Stanford University, whose books on neural networks launched the previous AI summer in the eighties, will conclude the event with his thoughts on the big pieces of the puzzle still missing and reflect on our long-term future with AI.
Our guests:
* Vera van de Seyp<https://veravandeseyp.com/> on creative collaboration with AI in typography & design
* Petra Gemeinboeck<https://www.impossiblegeographies.net/> and Rob Saunders<https://www.robsaunders.net/> on creative robotics
* Eduard Fosch Villaronga<https://www.universiteitleiden.nl/en/staffmembers/eduard-fosch-villaronga#t…> on inclusive robotics & AI
* Suzan Verberne<http://liacs.leidenuniv.nl/~verbernes/> & Jasper Schelling<https://www.linkedin.com/in/jasperschelling/> on polarization and misinformation in media
* Jay McClelland<https://stanford.edu/~jlmcc/>, on what we have learned from AI & connectionist models of the mind, and the grand challenges & questions remaining
More detailed information on the speakers and the event is available on the SAILS website<https://www.universiteitleiden.nl/en/sails/research/webinar-dec-2020-art-so…>.
Organizers and moderators
* Tessa Verhoef<https://www.universiteitleiden.nl/en/staffmembers/tessa-verhoef#tab-1>: assistant professor, Leiden Institute for Advanced Computer Science, Leiden University
* Roy de Kleijn<https://roydekleijn.nl/>, assistant professor, Cognitive Psychology, Leiden University
* Peter van der Putten<http://liacs.leidenuniv.nl/~puttenpwhvander/>: assistant professor, Leiden Institute for Advanced Computer Science, Leiden University
* Mischa Hautvast<https://www.universiteitleiden.nl/en/staffmembers/mischa-hautvast#tab-1>: Program coordinator, SAILS
* Streaming production: Lesley van Hoek<https://lesleyvanhoek.nl/>, Graphical design: Vera van de Seyp<https://veravandeseyp.com/>
Dear colleagues,
The Korteweg-de Vries Institute for Mathematics at the University of
Amsterdam has a vacancy for a
*PhD position in Mathematical Machine Learning*
The candidate will be supervised by Tim van Erven to work on a topic to
be selected. Possible topics include:
* Adaptive Online Convex Optimization/Sequential Prediction
* Mathematical formalization and analysis of methods that generate
explanations for the decisions of black-box classifiers
We are looking for strong candidates with a background in Mathematical
Statistics/AI/CS or related areas.
For application instructions and further details, see
https://www.uva.nl/en/content/vacancies/2020/12/20-763-phd-position-in-in-m…
I would appreciate it if you could pass this along to potential candidates.
Best regards,
Tim
--
Tim van Erven <tim(a)timvanerven.nl>
www.timvanerven.nl
Dear all,
This is a reminder of tomorrow's CWI Machine Learning seminar, with a
*new and improved zoom URL*.
Speaker: Christian Hennig (University of Bologna)
Title: A spotlight on statistical model assumptions
Date: Friday 27 November, 15:00
Location:
https://cwi-nl.zoom.us/j/87252095652?pwd=aXI0K1VUdlNlbGlReEE3WGMyWXd6QT09
Please find the abstract below.
Hope to see you then.
Best wishes,
Wouter
Details:
https://portals.project.cwi.nl/ml-reading-group/events/a-spotlight-on-stati…
============
A spotlight on statistical model assumptions
Christian Hennig (University of Bologna)
Many statistics teachers tell their students something like "In order to
apply the t-test, we have to assume that the data are i.i.d. normally
distributed, and therefore these model assumptions need to be checked
before applying the t-test." This statement is highly problematic in
several respects. There is no good reason to believe that any real data
truly are drawn i.i.d. normally. Furthermore, quite relevant aspects of
these model assumptions cannot be checked. For example, I will show that
data generated from a normal distribution with a correlation of
$\rho\neq 0$ between any two observations cannot be distinguished from
i.i.d. normal data. On top of this, passing a model checking test will
automatically invalidate the model; much of the literature investigating
the performance of procedures that run model-based tests conditionally
on passing a model misspecification test comments very critically on
this practice.
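The indistinguishability claim can be illustrated with a small simulation (my own sketch, not part of the abstract): equicorrelated normal data can be generated from a shared latent factor W, and conditionally on W the observations are exactly i.i.d. normal with a shifted mean. No check applied to a single dataset can therefore detect the correlation; it only becomes visible across independent replications of the whole experiment.

```python
import math
import random

def equicorrelated_normal(n, rho, rng):
    """One dataset of n standard-normal observations with correlation rho
    between every pair, built from a shared latent factor W."""
    w = rng.gauss(0.0, 1.0)  # common shock shared by all observations
    # Conditionally on W, these are i.i.d. N(sqrt(rho)*W, 1 - rho):
    return [math.sqrt(rho) * w + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)
            for _ in range(n)]

rng = random.Random(0)
rho = 0.5

# A single dataset: indistinguishable from i.i.d. normal data.
sample = equicorrelated_normal(1000, rho, rng)

# The correlation only shows up ACROSS replications of the experiment:
reps = [equicorrelated_normal(2, rho, rng) for _ in range(20000)]
xs = [r[0] for r in reps]
ys = [r[1] for r in reps]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
vx = sum((x - mx) ** 2 for x in xs) / len(xs)
vy = sum((y - my) ** 2 for y in ys) / len(ys)
corr = cov / math.sqrt(vx * vy)  # empirical cross-replication correlation, near rho
```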
Despite all these issues, I will defend interpreting and using
statistical models in a frequentist manner, by advocating an
understanding of models that never forgets that models are essentially
different from reality (and in this sense can never be "true"). Model
assumptions specify idealised conditions under which methods work well;
in reality they do not need to be fulfilled. However, situations in
which the data will mislead a method need to be distinguished from
situations in which a method does what it is expected to do. This
defines a more appropriate task for model checking. Doing this job
properly requires conditions that some model checking currently in use
does not fulfill. For better "model checking" it will be helpful
to understand that this is not about "finding out whether the model
assumptions hold", but about something quite different.
(apologies for cross-posting)
We are pleased to announce that the Department of Intelligent Systems at
TU Delft, The Netherlands, can offer a 3-year postdoc position, as part
of the "Hybrid Intelligence" project, www.hybrid-intelligence-centre.nl.
Closing date: January 8th
--------------------------------------------------
Postdoc in (Meta-)Learning to Give Feedback in Interactive Learning (3
years)
How can an intelligent system learn to interact? How can it learn via
interaction? For this project we are looking for a postdoc who wants to
push machine learning beyond traditional settings that assume a fixed
dataset. Specifically, in this project we will investigate interactive
learning settings in which two or more learners interact by giving each
other feedback to reach an outcome that is desirable from a system
designer's perspective. The goal is to better understand how to structure
interactions to effectively progress to the desirable outcome state, and
to develop practical learning techniques and algorithms that exploit
these generated insights.
The postdoc will be based at TU Delft and co-supervised by Herke van
Hoof (University of Amsterdam) and myself. Given that the successful
candidate will have to work with 2 supervisors at different
institutions, we are looking for someone who can operate quite
independently.
Full requirements and application instructions:
https://www.academictransfer.com/en/295565/postdoc-meta-learning-to-give-fe…
For more information, please see:
https://www.fransoliehoek.net/wp/vacancies/
Informal inquiries are welcome and can be directed to me:
Dr. Frans Oliehoek <f.a.oliehoek(a)tudelft.nl>.
--------------------------------------------------
I would be grateful if you could forward this message to suitable
candidates.
Best regards,
-Frans Oliehoek
Dear all,
I'm forwarding an announcement of an online talk at the UvA by my PhD
student Dirk van der Hoeven that might be of more general interest.
Best regards,
Tim
-------- Forwarded Message --------
Subject: Next SPIP talk: Dirk van der Hoeven
Date: Fri, 30 Oct 2020 19:00:47 +0100
From: SPIP Meetings <spip.meetings(a)gmail.com>
To: [...]
Dear All,
We are happy to invite you to our next SPIP talk on *Friday, 6th
November* from *16:00-17:00*. Our speaker is Dirk van der Hoeven and he
will talk about 'Exploiting the Surrogate Gap in Online Multiclass
Classification'.
Dirk is a PhD student at Leiden University with Tim van Erven and he
will join Nicolò Cesa-Bianchi's group as a postdoc in December.
Zoom Details:
Topic: SPIP - Dirk van der Hoeven
Time: Nov 6, 2020 04:00 PM Amsterdam, Berlin, Rome, Stockholm, Vienna
Join Zoom Meeting
https://uva-live.zoom.us/j/82915951912
Meeting ID: 829 1595 1912
Abstract:
We present Gaptron, a randomized first-order algorithm for online
multiclass classification. In the full information setting we show
expected mistake bounds with respect to the logistic loss, hinge loss,
and the smooth hinge loss with constant regret, where the expectation is
with respect to the learner's randomness. In the bandit classification
setting we show that Gaptron is the first linear time algorithm with O(K
sqrt(T)) expected regret, where K is the number of classes.
Additionally, the expected mistake bound of Gaptron does not depend on
the dimension of the feature vector, contrary to previous algorithms
with O(K sqrt(T) ) regret in the bandit classification setting. We
present a new proof technique that exploits the gap between the zero-one
loss and surrogate losses rather than exploiting properties such as
exp-concavity or mixability, which are traditionally used to prove
logarithmic or constant regret bounds.
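To make the gap between the zero-one loss and a surrogate loss concrete (a toy sketch of the general idea, not an implementation of Gaptron; the score vectors below are invented), here is the multiclass hinge loss, which upper-bounds the zero-one loss, with the gap opening whenever the margin of the correct class is small:

```python
def zero_one(s, y):
    """Zero-one loss of score vector s for true class y: 1 iff argmax != y."""
    return 0.0 if s.index(max(s)) == y else 1.0

def multiclass_hinge(s, y):
    """Multiclass hinge surrogate: max(0, 1 + max_{k != y} s_k - s_y).
    Always >= the zero-one loss."""
    other = max(sk for k, sk in enumerate(s) if k != y)
    return max(0.0, 1.0 + other - s[y])

examples = [
    ([2.0, 0.1, -1.0], 0),  # correct with margin > 1: both losses are 0
    ([2.0, 1.5, -1.0], 0),  # correct but small margin: surrogate gap appears
    ([0.5, 2.0, -1.0], 0),  # wrong prediction: hinge >= 1 >= zero-one
]
for s, y in examples:
    gap = multiclass_hinge(s, y) - zero_one(s, y)
    assert gap >= 0.0  # the surrogate never under-estimates the 0-1 loss
```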
The Organizer-team
--
Tim van Erven <tim(a)timvanerven.nl>
www.timvanerven.nl