Centre for Doctoral Training in Data Intensive Science

Seminars 2019-2021

14-01-2021 : Sudeep Das (Netflix)

Personalisation at Netflix: Making Stories Travel

At Netflix, we believe that great stories can come from anywhere in the world. We leverage deep human expertise together with machine learning techniques to enable a highly personalized experience for our members, one that also encourages discovery and helps connect them with stories from the edges of the world. My talk will be a high-level overview of the ML landscape that makes this magical experience happen.

Sudeep Das is a Senior Researcher in Machine Learning and Artificial Intelligence at Netflix, with expertise in natural language processing, recommender systems, and information retrieval. His main work lies at the heart of the applied machine learning algorithms that power highly personalized experiences within consumer-facing products. He is especially passionate about applications of deep neural networks and causal inference in the space of recommendations and search. He has about a decade of experience in applied research in personalization, from building music recommendation systems at Beats Music (later Apple Music) to leading the restaurant recommendation efforts at OpenTable Inc. (Priceline). Currently, at Netflix, he leads several projects in the algorithmic innovation space that aim to constantly improve the personalized experience of hundreds of millions of Netflix members across the globe. Sudeep holds a PhD in Astrophysics from Princeton University, where he led a research project that made possible the first detection of an elusive astrophysical effect. Sudeep is extremely passionate about education and outreach, and has acted as a mentor and lecturer at machine learning workshops for young students in the US (UC Berkeley) and in Mauritius, Rwanda and South Africa.

16-11-2020 : Prof. Ulrich Kerzel (IUBH University of Applied Sciences)

Data science & AI in the "real world"

Data science and AI have seen huge growth and many successes in the past decade. However, using these methods to create value in a commercial setting requires a diverse skill set, most of which is not taught at universities. In this talk, we'll explore what we mean by "data science and AI" in the private sector, how to set up a project, why such projects often fail, and what graduates from engineering and physics find most challenging when they leave academia to join a company in this area.

19-10-2020 : Caterina La Porta and Stefano Zapperi (University of Milan)

Estimating individual susceptibility to SARS-CoV-2 in human subpopulations using artificial neural networks

The response to SARS-CoV-2 infection differs from person to person, with some patients developing more severe symptoms than others. The reasons for the observed differences in the severity of COVID-19 are mostly still unknown. When a cell is infected by a virus, it exposes on its surface fragments of the viral proteins, or peptides, in association with HLA molecules. There are two classes of HLA molecules: class I and class II. HLA class I molecules are exposed on the surface of all nucleated cells and trigger the activation of T cells, which then destroy the infected cell. HLA molecules differ from individual to individual, and so does their ability to bind viral fragments and expose them on the cell surface. In this talk we show that artificial neural networks (ANNs) can be used to analyze the binding of SARS-CoV-2 peptides with HLA class I molecules. The ANN was first trained with experimentally known binding affinities of peptide-HLA pairs and then used to predict the binding of SARS-CoV-2 peptides. In this way, we identify two sets of HLA molecules present in specific human populations: the first set displays weak binding with SARS-CoV-2 peptides, while the second shows strong binding and T cell propensity.
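
The core pipeline described here, training a network on known peptide-HLA affinities and then scoring viral peptides, can be sketched in a few lines. The following is a toy illustration, not the authors' model: the peptides, affinity values, encoding and decision threshold are all invented for demonstration.

```python
# Minimal sketch (not the authors' code): train a feed-forward network on
# known peptide-HLA binding affinities, then score SARS-CoV-2 peptides.
# Peptides, affinities, and the encoding below are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(peptide, length=9):
    """One-hot encode a 9-mer peptide into a flat 9x20 vector."""
    x = np.zeros((length, len(AMINO_ACIDS)))
    for pos, aa in enumerate(peptide):
        x[pos, AA_INDEX[aa]] = 1.0
    return x.ravel()

# Toy training data: (peptide, normalised binding affinity) pairs.
train_peptides = ["SIINFEKLV", "GILGFVFTL", "KLGGALQAK", "YLQPRTFLL"]
train_affinity = np.array([0.2, 0.8, 0.5, 0.9])  # 1 = strong binder

X = np.stack([one_hot(p) for p in train_peptides])
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X, train_affinity)

# Score candidate viral peptides; a high score suggests strong HLA binding.
viral_peptides = ["FIAGLIAIV", "VLNDILSRL"]
scores = model.predict(np.stack([one_hot(p) for p in viral_peptides]))
for p, s in zip(viral_peptides, scores):
    print(p, "strong binder" if s > 0.6 else "weak binder", f"(score {s:.2f})")
```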

01-07-2020 : Prof. Mirco Musolesi (UCL), a Turing Fellow at the Alan Turing Institute

Sensing and Modelling Human Behaviour and Emotional States using Mobile Devices

Today's mobile phones are far from the mere communication devices they were just ten years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then reasoning and acting upon the predicted context. Information about users' behaviour can also be gathered by means of wearables and IoT devices, as well as by sensors embedded in the fabric of our cities. Inference is not limited to physical context and activities: in recent years mobile phones have increasingly been used to infer users' emotional states. These techniques have several applications, from positive behavioural intervention to more natural and effective human-mobile device interaction. In this talk I will discuss the work of my lab in the area of mobile sensing for modelling and predicting human behaviour and emotional states. I will present our ongoing projects in the area of mobile systems for mood monitoring and mental health. In particular, I will show how mobile phones can be used to collect and analyse mobility patterns of individuals in order to quantitatively understand how mental health problems affect their daily routines and behaviour, and how potential changes in mood can be automatically detected from sensor data in a passive way. Finally, I will discuss our research directions in the broader area of anticipatory mobile computing, outlining the open challenges and opportunities.
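
As a concrete flavour of passive mood detection from mobility data, here is a minimal sketch, not the lab's actual pipeline: it derives a few standard mobility features (radius of gyration, number of visited places, location entropy) from daily GPS traces and fits a classifier against self-reported mood labels. All data, features and labels are invented.

```python
# Illustrative sketch: simple mobility features from daily GPS traces
# feeding a classifier for self-reported low-mood days. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def mobility_features(points):
    """points: (n, 2) array of one day's position fixes (metres, projected)."""
    centroid = points.mean(axis=0)
    radius_of_gyration = np.sqrt(((points - centroid) ** 2).sum(axis=1).mean())
    # Entropy over a coarse 500 m grid as a proxy for routine diversity.
    cells, counts = np.unique((points // 500).astype(int), axis=0,
                              return_counts=True)
    p = counts / counts.sum()
    location_entropy = -(p * np.log(p)).sum()
    return [radius_of_gyration, len(cells), location_entropy]

rng = np.random.default_rng(0)
# Toy data: 40 days of position fixes; label 1 = self-reported low mood.
days = [rng.normal(0, 200 + 2000 * (i % 2), size=(50, 2)) for i in range(40)]
labels = np.array([i % 2 for i in range(40)])  # wider-roaming days labelled 1

X = np.array([mobility_features(d) for d in days])
clf = LogisticRegression().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```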

03-04-2020 : Ciarán Lee (Babylon Health)

Causal Inference in Healthcare

Causal knowledge is vital for effective reasoning in science and medicine. In medical diagnosis, for example, a doctor aims to explain a patient's symptoms by determining the diseases causing them. This is because causal relations, unlike correlations, allow one to reason about the consequences of possible treatments. However, all previous approaches to machine-learning-assisted diagnosis, including deep learning and model-based Bayesian approaches, learn by association and do not distinguish correlation from causation. I will show that these approaches systematically lead to incorrect diagnoses. I will outline a new diagnostic algorithm, based on counterfactual inference, which captures the causal aspect of diagnosis overlooked by previous approaches and overcomes these issues. I will additionally describe recent algorithms from my group which can discover causal relations from uncontrolled observational data, and show how these can be applied to facilitate effective reasoning in medical settings such as deciding how to treat certain diseases.
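
To make the distinction concrete, here is a toy sketch in the spirit of counterfactual diagnosis, not the speaker's algorithm: a single symptom modelled as a noisy-OR over two candidate diseases plus a leak term. It contrasts the purely associative posterior P(disease | symptom) with a counterfactual score, the probability that the symptom would have been absent had the disease been switched off. All numbers are invented.

```python
# Toy counterfactual-diagnosis sketch (illustrative, not Babylon's model).
from itertools import product

prior = {"flu": 0.10, "rare_disease": 0.01}    # P(D_k = 1)
strength = {"flu": 0.4, "rare_disease": 0.95}  # P(U_k = 1): causal strength
leak = 0.05                                    # symptom from unmodelled causes

def worlds():
    """Enumerate (diseases, noise terms) with their joint probability."""
    names = list(prior)
    for d in product([0, 1], repeat=2):
        for u in product([0, 1], repeat=2):
            for ul in [0, 1]:
                p = 1.0
                for k, name in enumerate(names):
                    p *= prior[name] if d[k] else 1 - prior[name]
                    p *= strength[name] if u[k] else 1 - strength[name]
                p *= leak if ul else 1 - leak
                yield dict(zip(names, d)), dict(zip(names, u)), ul, p

def symptom(d, u, ul):
    # Noisy-OR: symptom fires if any active disease's noise term fires.
    return ul or any(d[name] and u[name] for name in d)

evidence = [(d, u, ul, p) for d, u, ul, p in worlds() if symptom(d, u, ul)]
z = sum(p for *_, p in evidence)

for name in prior:
    posterior = sum(p for d, u, ul, p in evidence if d[name]) / z
    # Counterfactual: same noise, same other diseases, but do(D_name = 0).
    disablement = sum(p for d, u, ul, p in evidence
                      if not symptom({**d, name: 0}, u, ul)) / z
    print(f"{name}: posterior={posterior:.3f}  counterfactual={disablement:.3f}")
```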

13-02-2020 : Elena Cuoco (Pisa U)

Gravitational Wave science and Machine Learning

In recent years, machine learning and deep learning approaches have been introduced and tested for solving problems in astrophysics. In gravitational wave science, many teams in the LIGO-Virgo collaboration have explored, on simulated and real data from the LIGO and Virgo interferometers, the power and capabilities of machine learning algorithms, both for detector noise characterisation and for astrophysical gravitational wave signals. The COST Action CA17137 (g2net) aims to create an interdisciplinary network of machine learning and gravitational wave experts, and to build collaborating teams that solve some of the problems of gravitational wave science using machine learning. In this seminar, I will show some of the results of the application of machine learning in the LIGO-Virgo collaboration and in the CA17137 COST Action, dedicated to the analysis of data from gravitational wave experiments.
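
As a flavour of the kind of algorithm being explored, here is a minimal sketch, not an actual LIGO-Virgo pipeline: a 1D convolutional network trained to separate pure noise from noise containing a crude chirp-like injection. The waveform, noise model and architecture are all toy stand-ins.

```python
# Minimal sketch: a 1D CNN classifying whitened strain segments as
# noise vs. noise + chirp-like signal. The "chirp" is a crude stand-in.
import torch
import torch.nn as nn

def make_batch(n=64, length=1024):
    t = torch.linspace(0, 1, length)
    noise = torch.randn(n, 1, length)
    labels = torch.randint(0, 2, (n,))
    # Toy chirp: sinusoid with linearly increasing frequency.
    chirp = 0.5 * torch.sin(2 * torch.pi * (30 * t + 40 * t**2))
    x = noise + labels.view(-1, 1, 1) * chirp
    return x, labels

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=16, stride=4), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    x, y = make_batch()
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 50 == 0:
        acc = (model(x).argmax(1) == y).float().mean()
        print(f"step {step}: loss={loss.item():.3f} acc={acc:.2f}")
```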

23-01-2020 : Howard Bowman (Kent U and Birmingham U)

Uses (and Abuses) of Machine Learning in Cognitive and Clinical Neuroscience

Machine learning has become extremely popular in cognitive neuroscience, and may be on the verge of impacting clinical applications of neuroimaging. Such methods offer the prospect of greatly increasing the statistical and explanatory power available to the field. The plan for this talk is to illustrate how machine learning is being used in cognitive and clinical neuroscience, thereby highlighting its promise, as well as some potential pitfalls. I will illustrate a number of machine learning approaches, including a time-oriented method called temporal generalisation; a classic spatial analysis, multivariate lesion-deficit mapping in stroke; and, if time allows, decoding methods, which are being applied to determine what is in a subject's mind at a particular point in time. While important findings are being made with these approaches, they are not always being applied completely robustly. Neuroimaging data sets are characterised by being very high dimensional, e.g. hundreds of thousands or millions of measurement units, such as voxels, with (often non-stationary) smoothness in space and time. Additionally, one is typically trying to identify regions of this volume that underlie a particular classification or prediction (i.e. localisation of function is important). This means that machine learning methods need to be carefully embedded within statistical inference that controls for multiple comparisons (so-called family-wise error correction), with consideration of the possibility of overfitting at the level of parameters and hyper-parameters. In particular, it may be that at the very point when the replication crisis in traditional experimental psychology is being addressed, machine learning is being applied in a fashion that inflates false positive (i.e. type-I error) rates [Skocik et al., 2016]. Additionally, interpretability of classification or prediction is critical for clinical uptake, limiting the applicability of some very powerful learning algorithms that effectively provide black-box solutions. Accordingly, there is also interest in methods that fit Bayesian graphs to data.

Reference: Skocik, M., Collins, J., Callahan-Flintoft, C., Bowman, H., & Wyble, B. (2016). I tried a bunch of things: the dangers of unexpected overfitting in classification. bioRxiv, 078816.
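
One standard guard against the hyper-parameter overfitting described above is nested cross-validation. The sketch below (illustrative, not from the talk) uses random features and random labels, so a sound procedure should report chance-level accuracy, whereas the non-nested estimate tends to be optimistically biased by the selection of the best hyper-parameter.

```python
# Hedged sketch: nested cross-validation, so hyper-parameter tuning never
# sees the data used to estimate accuracy. Labels are random, so an honest
# estimate should hover near chance (0.5).
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))    # 80 "subjects", 500 voxel-like features
y = rng.integers(0, 2, size=80)   # random labels: no real signal

inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)

# Non-nested: the same folds both tune C and score the model (optimistic).
print("non-nested:", inner.fit(X, y).best_score_)

# Nested: an outer loop scores the already-tuned model on unseen folds.
print("nested:    ", cross_val_score(inner, X, y, cv=5).mean())
```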

12-11-2019 : Niall Jeffrey (ENS Paris)

DeepMass: Deep learning dark matter map reconstructions from DES SV weak lensing data

I will present the first reconstruction of dark matter maps from weak lensing observational data using deep learning. We train a convolutional neural network (CNN) with a U-Net-based architecture on over 3.6×10^5 simulated data realisations with non-Gaussian shape noise and with cosmological parameters varying over a broad prior distribution. We interpret our newly created DES SV map as an approximation of the mean of the posterior P(κ|γ) of the convergence given the observed shear. Our DeepMass method is substantially more accurate than existing mass-mapping methods. With a validation set of 8000 simulated DES SV data realisations, the DeepMass method improved the mean square error (MSE) by 11 per cent compared to Wiener filtering with a fixed power spectrum. With higher galaxy density in future weak lensing data unveiling more non-linear scales, it is likely that deep learning will be a leading approach for mass mapping with Euclid and LSST. (arXiv:1908.00543)
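
The training objective is the key point: a network fitted with a mean-squared-error loss on simulations approximates the posterior mean of the target given the input. The toy sketch below (schematic, not the DeepMass code) illustrates this with a miniature U-Net-style denoiser on invented smooth random fields standing in for convergence maps.

```python
# Schematic sketch: a tiny U-Net-style network trained with an MSE loss,
# so its output approximates the posterior mean of kappa given a noisy map.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # Skip connection: concatenate encoder features before decoding.
        self.dec = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.down(e))
        return self.dec(torch.cat([e, d], dim=1))

# Toy data: "kappa" maps are smoothed random fields; inputs add shape noise.
kappa = torch.randn(8, 1, 64, 64)
kappa = nn.functional.avg_pool2d(kappa, 9, stride=1, padding=4)
noisy = kappa + 0.5 * torch.randn_like(kappa)

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    loss = ((model(noisy) - kappa) ** 2).mean()  # MSE -> posterior mean
    opt.zero_grad(); loss.backward(); opt.step()
print("final MSE:", loss.item())
```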

08-10-2019 : Peter Battaglia (DeepMind)

Learning Structured Models of Physics

This talk will describe a class of machine learning methods for reasoning about complex physical systems. The key insight is that many systems can be represented as graphs with nodes connected by edges. I'll present a series of studies which use graph neural networks (deep neural networks that approximate functions on graphs via learned message-passing-like operations) to predict the movement of bodies in particle systems, infer hidden physical properties, control simulated robotic systems, and build physical structures. These methods are not specific to physics, however, and I'll show how we and others have applied them to broader problem domains with rich underlying structure.
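
For readers new to graph networks, the message-passing step can be written in a few lines of numpy. This is a schematic sketch, not DeepMind's graph_nets library: edge functions compute messages from sender and receiver states, which are summed at each receiving node and used to update the node states.

```python
# Minimal numpy sketch of one message-passing step in a graph network.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, node_dim, msg_dim = 4, 8, 16
nodes = rng.normal(size=(n_nodes, node_dim))      # e.g. position, velocity
senders = np.array([0, 1, 2, 3])                  # edge list: 0->1, 1->2, ...
receivers = np.array([1, 2, 3, 0])

W_edge = rng.normal(size=(2 * node_dim, msg_dim))  # learned in practice
W_node = rng.normal(size=(node_dim + msg_dim, node_dim))

def message_passing_step(nodes):
    # Edge update: a message from each sender/receiver pair.
    edge_in = np.concatenate([nodes[senders], nodes[receivers]], axis=1)
    messages = np.tanh(edge_in @ W_edge)
    # Aggregate: sum the messages arriving at each node.
    agg = np.zeros((n_nodes, msg_dim))
    np.add.at(agg, receivers, messages)
    # Node update: combine the old state with aggregated messages.
    return np.tanh(np.concatenate([nodes, agg], axis=1) @ W_node)

for _ in range(3):  # a few steps let information propagate along edges
    nodes = message_passing_step(nodes)
print(nodes.shape)  # (4, 8): updated node states
```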

16-09-2019 : Tony Hey (Rutherford Appleton Laboratory, STFC)

Machine Learning and Big Scientific Data: What can AI do for the Facilities?

N/A

18-06-2019 : ASOS Group Project Participants

Intent Modelling from Natural Language

We study the performance of customer intent classifiers to predict the most popular intent received through ASOS customer care, namely "Where is my order?". We conduct extensive experiments to compare the accuracy of two popular classification models: logistic regression via N-grams that account for sequences in the data, and recurrent neural networks that perform the extraction of sequential patterns automatically. A Mann-Whitney U test indicated that the F1 score on a representative sample of held-out labelled messages was greater for linear N-gram classifiers than for recurrent neural network classifiers (M1 = 0.828, M2 = 0.815; U = 1,196, P = 1.46e-20), unless all neural layers, including the word representation layer, were trained jointly on the classification task (M1 = 0.831, M2 = 0.828; U = 4,280, P = 8.24e-4). Overall our results indicate that using simple linear models in modern AI production systems is a judicious choice unless the necessity for higher accuracy significantly outweighs the cost of much longer training times.
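
The linear baseline the study favours is straightforward to reproduce in outline. Below is a hedged sketch, not the ASOS system: TF-IDF n-gram features feeding a logistic regression, with invented messages and intent labels.

```python
# Sketch of an n-gram linear intent classifier. Toy messages and labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "where is my order it has not arrived",
    "my parcel still has not been delivered",
    "how do I return these jeans",
    "can I change the delivery address",
]
labels = ["where_is_my_order", "where_is_my_order", "returns", "delivery_change"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),  # unigrams to trigrams
    LogisticRegression(max_iter=1000),
)
clf.fit(messages, labels)
print(clf.predict(["my order has not arrived yet"]))
```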

11-06-2019 : Simon Arridge (Institute of Inverse Problems, UCL)

Combining learned and model based approaches for inverse problems

Deep Learning (DL) has become a pervasive approach in many machine learning tasks, and in particular in image processing problems such as denoising, deblurring, inpainting and segmentation. Such problems can be classified as inverse problems where the forward operator is a mapping from image to image space. More generally, inverse problems (IPs) involve the inference of solutions from data obtained by measurements in a data space with quite different properties to the image space, and result from a forward operator that may have spectral and range constraints. Inverse problems are typically ill-posed, exhibiting one or more of the characteristic difficulties of Hadamard's original classification: non-existence, non-uniqueness and/or instability of solutions. Thus the application of DL within inverse problems is less well explored, because it is not trivial to include physics-based knowledge of the forward operator in what is usually a purely data-driven framework. In addition, some inverse problems are at a scale much larger than image or video processing applications and may not have access to sufficiently large training sets. Approaches to this idea include: i) fully learned (end-to-end) systems mapping data directly into a solution; ii) post-processing methods which perform a straightforward solution method such as back-projection (adjoint operation) followed by "de-artefacting" to enhance the solution, treating artefacts as noise with a particular structure; iii) iterative methods that unroll a variational solution and apply networks as a generalisation of a proximal operator; iv) learned regularisation, where training sets are used to construct an equivalent prior distribution, followed by classical variational methods. Finally, there is a class of methods in which the forward operator itself is learned, either by correcting a simple and computationally cheap operator by learning in the data domain, or by learning a physical model by interpreting the kernels of a feed-forward network as a generalisation of a PDE, with the layers representing time evolution.
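
Approach iii), unrolled iterative reconstruction with a learned proximal step, is perhaps the easiest to sketch. The toy example below is schematic, in the spirit of learned gradient schemes rather than the authors' code: it unrolls a few gradient steps on ||Ax - y||^2 for a simple blurring operator A, inserting a small CNN after each step.

```python
# Sketch of an unrolled iterative scheme with a learned proximal step.
import torch
import torch.nn as nn

blur = nn.Conv2d(1, 1, 5, padding=2, bias=False)
blur.weight.data.fill_(1 / 25)           # forward operator A: mean filter
for p in blur.parameters():
    p.requires_grad_(False)               # A is known physics, not learned

class UnrolledNet(nn.Module):
    def __init__(self, n_iter=5):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(1.0))
        self.prox = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_iter))

    def forward(self, y):
        x = y.clone()                     # initialise from the data
        for prox in self.prox:
            # Gradient of 0.5||Ax - y||^2; the mean filter is ~self-adjoint.
            grad = blur(blur(x) - y)
            x = x - self.step * grad
            x = x + prox(x)               # learned proximal-style correction
        return x

x_true = torch.rand(8, 1, 32, 32)
y = blur(x_true) + 0.05 * torch.randn(8, 1, 32, 32)
model = UnrolledNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = ((model(y) - x_true) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("reconstruction MSE:", loss.item())
```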

In this talk I will present some of our work within this framework. I will give examples from cardiac magnetic resonance imaging (MRI), photoacoustic tomography (PAT) and non-linear image diffusion, among other applications.

Joint work with Marta Betcke, Andreas Hauptmann and Felix Lucka.

04-06-2019 : Dr Hao Ni (Mathematics, UCL)

Learning to predict the effects of data streams using Logsig-RNN model

Supervised learning problems that take streamed data (a path) as input are important due to various applications in computer vision, e.g. automatic character identification based on the pen trajectory (online handwritten character recognition) and gesture recognition in videos. Recurrent neural networks (RNNs) are a very popular class of neural networks, which are well suited to supervised learning on the path space and have been successful in various computer vision applications such as gesture recognition. Stochastic differential equations (SDEs) are the foundational building blocks of derivatives pricing theory, an area of huge financial impact. Motivated by the numerical approximation theory of SDEs, we propose a novel and effective algorithm (the Logsig-RNN model) to tackle this problem by combining the log-signature feature set and an RNN. The log-signature serves as a top-down description of a data stream that captures its effects economically, and as a feature set it significantly improves the performance of the RNN. Compared with an RNN based on raw data alone, the proposed method achieves better accuracy, efficiency and robustness on various data sets (synthetic data generated by an SDE, the UCI Pen-Digit data and the ChaLearn 2013 gesture recognition data). On the ChaLearn 2013 data (skeleton data only), the proposed method achieves state-of-the-art classification accuracy.
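
To give a feel for the feature set, the sketch below computes truncated (level-2) log-signatures of path segments by hand: the increment terms plus the antisymmetric Lévy areas. This is an illustration, not the authors' Logsig-RNN; in practice packages such as esig or iisignature compute higher-order terms, and the per-window features would feed an RNN.

```python
# Hand-rolled level-2 log-signature features for windows of a 2-D path.
import numpy as np

def logsig_level2(path):
    """Level-2 log-signature of a d-dimensional path: total increments
    plus the antisymmetric Levy areas A_ij."""
    inc = np.diff(path, axis=0)                    # (n-1, d) increments
    level1 = inc.sum(axis=0)                       # total displacement
    x = path - path[0]
    # Levy area: 0.5 * sum of (x_i dx_j - x_j dx_i) over the path.
    area = 0.5 * (x[:-1].T @ inc - inc.T @ x[:-1])
    iu = np.triu_indices(path.shape[1], k=1)       # independent entries
    return np.concatenate([level1, area[iu]])

# Toy 2-D pen trajectory split into windows, each summarised by a logsig.
rng = np.random.default_rng(0)
trajectory = rng.normal(size=(100, 2)).cumsum(axis=0)
windows = np.array_split(trajectory, 10)
features = np.stack([logsig_level2(w) for w in windows])
print(features.shape)  # (10, 3): per-window inputs for a downstream RNN
```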

21-05-2019 : Shirley Ho (Flatiron Institute)

Machine Learning the Universe: Opening the Pandora Box

Scientists have always attempted to identify and document analytic laws that underlie physical phenomena in nature. The process of finding natural laws has always been a challenge that requires not only experimental data, but also theoretical intuition. Oftentimes, these fundamental physical laws are derived from many years of hard work over many generations of scientists. Automated techniques for generating, collecting, and storing data have become increasingly precise and powerful, but automated discovery of natural laws in the form of analytical laws or mathematical symmetries has so far been elusive. Over the past few years, the application of deep learning to the domain sciences, from biology to chemistry and physics, has raised the exciting possibility of a data-driven approach to automated science, one that could make the laborious hand-coding of semantics and instructions still necessary in most disciplines seem irrelevant. The opaque nature of deep models, however, poses a major challenge. For instance, while several recent works have successfully designed deep models of physical phenomena, the models do not give any insight into the underlying physical laws. This requirement for interpretability across a variety of domains has received diverse responses. In this talk, I will present our analysis, which suggests a surprising alignment between the representation in the scientific model and the one learned by the deep model.
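
One simple way to probe such representation alignment, offered here as a hedged illustration rather than the speaker's analysis, is a linear probe: train a network on simulated data, then test whether a known physical parameter can be read off linearly from its hidden activations. Everything below is a toy stand-in.

```python
# Linear-probe sketch: does the hidden layer encode the physical parameter?
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
theta = rng.uniform(1, 3, size=(500, 1))    # hidden physical parameter
X = np.sin(theta * np.linspace(0, 6, 20))   # observed "signal"
y = (theta ** 2).ravel()                    # prediction target

net = MLPRegressor(hidden_layer_sizes=(32, 8), max_iter=3000, random_state=0)
net.fit(X, y)

# Forward pass by hand up to the last hidden layer (sklearn stores weights).
h = X
for W, b in zip(net.coefs_[:-1], net.intercepts_[:-1]):
    h = np.maximum(h @ W + b, 0)            # ReLU hidden activations

probe = LinearRegression().fit(h, theta.ravel())
print("linear probe R^2 on theta:", probe.score(h, theta.ravel()))
```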

14-05-2019 : Sofia Olhede (Statistics, UCL)

Detecting spatial and point process associations

Point processes are challenging to analyse because, of all spatial processes, they contain the least information. Understanding their pattern then becomes an exercise in balancing the complexity of any model against the tractability of evaluating any proposed likelihood function. Testing for associations is equally challenging, and if many tests need to be implemented, it becomes difficult to balance different types of errors. I will discuss both likelihood approximations and the intricacies of testing in this setting.
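
As a minimal illustration of association testing between point patterns (a toy sketch, not the methods of the talk), one can compare a summary statistic, here the mean nearest-neighbour distance from one pattern to another, against its Monte Carlo distribution under a null of complete spatial randomness.

```python
# Toy Monte Carlo association test between two point patterns.
import numpy as np

rng = np.random.default_rng(0)

def mean_nn_dist(a, b):
    """Mean distance from each point of a to its nearest neighbour in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy data on the unit square: B points cluster around A points.
a = rng.uniform(size=(30, 2))
b = a[rng.integers(0, 30, size=40)] + rng.normal(0, 0.02, size=(40, 2))

observed = mean_nn_dist(a, b)
# Null: resimulate B uniformly (complete spatial randomness), 999 times.
null = [mean_nn_dist(a, rng.uniform(size=(40, 2))) for _ in range(999)]
p_value = (1 + sum(s <= observed for s in null)) / 1000
print(f"observed={observed:.4f}, p={p_value:.3f}")  # small p -> association
```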

16-04-2019 : Sofia Vallecorsa (CERN openlab)

Generative Models in High Energy Physics

Theoretical and algorithmic advances, the availability of data, and computing power are driving AI. Specifically, in the Deep Learning (DL) domain, these advances have opened the door to exceptional perspectives for application in the most diverse fields of science, business and society at large, and notably in High Energy Physics (HEP). The HEP community has a long tradition of using machine learning methods to solve tasks mostly related to the efficient selection of interesting events against the overwhelming background produced at colliders. Today, many HEP experiments are working on integrating deep learning into their workflows for different applications: from data quality assurance, to real-time selection of interesting collision events, to simulation and data analysis. In particular, generative models are being developed as fast alternatives to Monte Carlo-based simulation. Generative models are also among the most promising approaches to analyse and understand the amount of information next-generation detectors will produce.

Training such models has been made tractable thanks to algorithmic improvements and the advent of dedicated hardware well adapted to the highly parallelisable task of training neural networks. High-performance computing and storage (HPC) technologies are often required by these kinds of projects, together with the availability of multi-architecture frameworks (ranging from large multi-core systems to hardware accelerators such as GPUs and FPGAs). Thanks to its unique role as a catalyst for collaborations between our community, leading ICT companies and other research organisations, CERN openlab is involved in a large set of deep learning and AI projects within the HEP community and beyond. This talk will present an overview of these activities.
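
To make the generative-model idea concrete, here is a bare-bones GAN sketch (schematic only; production efforts in HEP are far more elaborate) that learns to produce toy one-dimensional "shower" energy profiles. Once trained, sampling the generator is much cheaper than full Monte Carlo simulation, which is the point of the approach.

```python
# Bare-bones GAN for toy 1-D energy-deposit profiles (not detector data).
import torch
import torch.nn as nn

def real_showers(n, bins=32):
    # Toy longitudinal profile: a noisy bump whose depth varies per event.
    depth = torch.rand(n, 1) * bins
    x = torch.arange(bins).float()
    return torch.exp(-0.5 * ((x - depth) / 4) ** 2) + 0.05 * torch.rand(n, bins)

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32), nn.Sigmoid())
D = nn.Sequential(nn.Linear(32, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    real, z = real_showers(64), torch.randn(64, 16)
    fake = G(z)
    # Discriminator: push real towards 1, generated towards 0.
    loss_d = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Sampling the trained generator replaces an expensive simulation call.
print(G(torch.randn(1, 16)).shape)
```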