PhD Projects and Funding Opportunities

The combination of history, tradition and current status of Computing Science research in Glasgow offers the best possible training for future computer scientists.
Faizuddin Mohd Noor, PhD graduate

Please browse the projects listed below, each of which has been proposed by a member of staff. In many cases these proposals can be modified; please contact the academic responsible to discuss. We also accept self-defined projects. Competitive scholarships are available for both UK and international students. For general information, please email socs-pgr-enquiries@glasgow.ac.uk. Please note that offers of admission are made independently of scholarship decisions: if your study depends on funding, be aware that any offer of a scholarship is confirmed separately. If you applied for a specific scholarship or project and have not received a communication from us, you may wish to use the enquiries address above to confirm your status.

Details of how to apply can be found on the Prospective students page.

Our research spans both theoretical and applied computing. For more information, please click here.

Funded projects

A list of our current scholarships will be displayed below when available - also see the Graduate School webpage for related scholarships.

TransiT

The University of Glasgow is a core member of TransiT (https://transit.ac.uk/), a national research centre involving seven universities with significant UKRI and stakeholder co-investment. TransiT explores Digital Twins for transport decarbonisation and mode integration, and is responsible for delivering new knowledge, technical capabilities and policy advice relating to UK priorities in this area.

We are currently recruiting PhD students, with fees covered (at Home student level) and a stipend available for 42 months. Some proposed topics are below. We are also open to applications on topics in similar areas.

Please get in touch with the proposed supervisor for more details. The lead supervisor is given, but you will be assigned additional supervisors.

A short research proposal is required at the application stage and should align with the chosen topic.

 

Evolving Federated Digital Twins with Symbolic Ontologies

Enquiries to: Dr B Archibald

This project will be jointly supervised by members of the TransiT team and involve the cross-cutting themes of ontologies, security, and formal methods.

Digital Twins are data-driven virtual representations (models) of real-world systems. A core challenge is specifying which elements of the real-world system are important, as some amount of abstraction is necessary. An ontology describes the entities of interest and their relations, and is often represented as a knowledge graph, e.g. relating cars and roads. In many existing approaches this graph is static and specified manually. There is scope for a different approach: learning parts of the ontology dynamically from data (perhaps with a human in the loop) and automatically updating the graph in response to system dynamics, for example determining that a new type of vehicle is operating in a particular area. Such a scenario occurred when electric scooters started to be deployed. We are interested both in transport ontologies and in the wider system, including energy, communications and security.

Research Area: To ensure correctness of the knowledge graph, we propose modelling it symbolically using graph transformation, which allows strong reasoning over the structure of the graph. For example, we can determine that no invalid relation is suggested, and use this to optimise the learning routines. Using similar formal mechanisms, a further research goal would be to generate synthetic data from proposed ontologies and discover how this might affect real-world systems; e.g. we might propose a completely new transport mode and ask how it will interact with existing modes.
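To make the idea concrete, here is a minimal, illustrative Python sketch (the entity types, subtype hierarchy and relations are all invented for the example) of the kind of symbolic check that could guard a learned knowledge graph: a proposed relation is only admitted if the ontology schema licenses it for the entities' types.

```python
# Minimal sketch (invented types and relations): admit a learned triple only
# if the ontology schema licenses it for some (super)types of its endpoints.

ALLOWED = {
    ("Vehicle", "travels_on", "Road"),
    ("Scooter", "travels_on", "CycleLane"),
    ("ChargePoint", "supplies", "Vehicle"),
}
SUBTYPES = {"Car": "Vehicle", "Scooter": "Vehicle", "EScooter": "Scooter"}

def supertypes(t):
    """All types t belongs to, walking up the subtype hierarchy."""
    chain = [t]
    while t in SUBTYPES:
        t = SUBTYPES[t]
        chain.append(t)
    return chain

def admissible(subj_type, rel, obj_type):
    """True if some licensed schema triple covers the proposed relation."""
    return any((s, rel, o) in ALLOWED
               for s in supertypes(subj_type)
               for o in supertypes(obj_type))

# A learning routine proposes that e-scooters use cycle lanes: accepted.
print(admissible("EScooter", "travels_on", "CycleLane"))  # True
# A nonsensical suggestion is rejected before the graph is updated.
print(admissible("Road", "travels_on", "Vehicle"))        # False
```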

A successful candidate will have scope to help shape the research area and will work alongside and contribute to a large National Research Hub.

 ________________________________________________________________________________________________

Reconfigurable Federated Digital Twins for Resilience

Enquiries to: Dr B Archibald

This project will be jointly supervised by members of the TransiT team and involve the cross-cutting themes of resilience monitoring for networked systems and formal methods.

Digital Twins are data-driven virtual representations (models) of real-world systems that can be used for monitoring, auditing, prediction, and control. Once a digital twin is created, there is limited experimentation with different designs due to prohibitive costs. One area where this is an issue is security and resilience: we need structured and automated ways for twins to change over time in order to improve their resilience to unknown events and security risks (cyberattacks).

Research Area: In this PhD project, the successful candidate will research the monitoring and control instrumentation needed to capture the changing behaviour of networked digital (and physical) twins, and use this to allow for dynamic resilience modelling and automated synthesis of digital twin variants to improve resilience. As we are interested in critical infrastructure (transport), we will consider diverse dataflows and cyberattack vectors as well as the normal behaviour profiling of the digital twin to guarantee the resilient operation of a twin under evolution. For example, we might wish to prove (formally) that resilience is not weakened during evolution (or to quantify potential weakening, e.g. 1% more likely to have service disruption). We believe such resilience guarantees are possible by employing formal models of the twins. These models, trained and refined with both real and generated twin data, will describe the resilience properties of the twins and subsequently instrument the cyber-physical system with routines that can ensure acceptable levels of service are maintained even during adversarial events.

The successful candidate will have scope to help shape the research area and will work alongside and contribute to a large National Research Hub.

 

Nokia Bell Labs Women's Scholarship

Applications are invited from female UK citizens for the Nokia Bell Labs Women’s Scholarship. This scholarship is part of a worldwide initiative across Nokia Bell Labs locations to increase recruitment of women into further study of STEM subjects, with the longer-term goal of increasing the proportion of women employed by the company to 30%. This scholarship is for one excellent candidate to pursue a PhD in Computing Science at the University of Glasgow in a topic of her choosing, subject to appropriate supervision arrangements within the School of Computing Science.

Terms and Conditions

The recipient’s tuition fees will be paid by the scholarship for 3.5 years of full-time study (part-time pro-rata for 6-8 years).  The student will also receive a stipend in line with Research Council rates.  The name and university email address of the recipient will be shared with Nokia Bell Labs on award, to enable communication between the recipient and the company.  The recipient will have informal opportunities to meet with Nokia Bell Labs employees to discuss their research, and receive information about Nokia Bell Labs regarding job and other opportunities with the company.  The scholarship does not represent an offer or guarantee of a job with Nokia Bell Labs in any way. The recipient of the scholarship will own any intellectual property that they generate during the course of their studies.

 

To apply, you should follow the procedure outlined at https://www.gla.ac.uk/schools/computing/postgraduateresearch/prospectivestudents/

Other Funded PhD projects

Semi-Supervised Medical Image Segmentation and Registration - Dr Fani Deligianni

Research fields: Semi-supervised learning; Medical image analysis; Deep learning

Description: Medical image segmentation and registration are crucial tasks in healthcare diagnostics and treatment planning. However, these tasks often require large amounts of labelled data, which can be expensive and time-consuming to obtain. To address this challenge, this research focuses on developing semi-supervised learning techniques for medical image segmentation and registration that can effectively leverage both labelled and unlabelled data. By combining the strengths of supervised and unsupervised learning, we aim to reduce the dependence on large, annotated datasets while maintaining high accuracy.
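As a rough illustration of the semi-supervised idea (on toy tabular data, not the deep segmentation models the project would actually use), the following sketch uses scikit-learn's self-training wrapper to learn from a dataset in which most labels are hidden:

```python
# Minimal sketch of semi-supervised learning via self-training on toy data.
# Real medical-image pipelines use deep segmentation networks; this only
# illustrates how unlabelled samples (label -1) can still be exploited.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, random_state=0)
y_partial = y.copy()
rng = np.random.default_rng(0)
y_partial[rng.random(len(y)) < 0.9] = -1  # hide 90% of the labels

model = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
model.fit(X, y_partial)  # confident pseudo-labels are added iteratively
print(f"accuracy on all data: {model.score(X, y):.2f}")
```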

Enrolment & opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science under the supervision of Dr. Fani Deligianni and will join the Biomedical AI and Imaging Informatics Group. Our lab explores several research problems in semi-supervised learning, medical image analysis and deep learning. The candidate will have the opportunity to work on cutting-edge research at the intersection of machine learning and medical imaging with potential real-world applications in healthcare.

Skills: The ideal candidate will have a strong background in computer science, with a focus on machine learning and image processing. A solid foundation in mathematics and/or statistics is important. Special areas of interest include deep learning architectures for image analysis, semi-supervised learning techniques, and optimization methods. A good understanding of medical imaging modalities (e.g., MRI, CT, ultrasound) will be a considerable plus. Strong programming skills (Python, PyTorch/TensorFlow) are required. Experience in image processing libraries (e.g., OpenCV, SimpleITK) is desirable. Good communication skills in English and teamwork capacity are essential for collaborating with interdisciplinary teams and disseminating research findings.

Funding opportunities and scholarships

The School of Computing Science is offering studentships to support PhD research for students starting in October 2025 or shortly thereafter.  The deadline for applications is 31 January 2025. More information is available here.

If you wish to be considered for a School of Computing Science studentship then please select "Univ of Glasgow Scholarships" as the funding source when completing your application.

You should refer to our Centre for Doctoral Training (CDT) in Socially Intelligent Artificial Agents (SOCIAL) for information on funded studentships in Computing Science and Psychology with Integrated Study.

Excellence Bursaries

We are delighted to offer several international and home excellence bursaries for postgraduate research students.   

These bursaries are designed to attract talented students to join our world-leading research sections, working on new and challenging problems in computer science. These are fee scholarships: the value of the bursary is deducted from the tuition fee payable to the University. Deadline: 31 January 2025.

More information is available at the Excellence Bursaries webpage.

Minerva Scholarships

The School of Computing Science is offering two Minerva Scholarships.  These are 5-year scholarships to support part-time PhD students spending two thirds of their time on research and one third of their time on teaching.  They support tuition fees at the home level and a remuneration package of at least £20,188 per year for living expenses.   

Deadline: 31 January 2025.

More information is available at the Minerva Scholarship webpage.

To apply, you should follow the procedure outlined at https://www.gla.ac.uk/schools/computing/postgraduateresearch/prospectivestudents/. Candidates will be interviewed for this position and must include a teaching statement indicating why they wish to teach, why they are suitable for this opportunity, and any previous teaching experience. In the online application system, you should enter “Minerva Scholarship” in the box that requests information about funding arrangements.

 

College Scholarships

The School of Computing Science is offering several studentships to support tuition fees at the home level and living expenses at the recommended UKRI rate, with an annual stipend (£18,622 [tbc] for session 2024-25). For these studentships, supervisors who are early-career members of staff will receive priority. Deadline: 31 January 2025.

How to apply:

Apply online; you must select “Univ of Glasgow Scholarship” as the funding source when submitting your application in order to be considered for a scholarship.

Glasgow-Singapore Scholarships

The School of Computing Science is offering two Glasgow-Singapore Joint PhD Scholarships. Each scholarship will support a research student who wishes to spend 18-24 months in Singapore working on a collaborative project.


The total duration of each Scholarship is 3.5 years. It is expected that each successful applicant will start their PhD degree in Glasgow and subsequently relocate to Singapore for at least 18 months to work on their PhD research with a collaborator from an eligible Singapore University.

Deadline: 31 January 2025.

More information is available at the Glasgow-Singapore Scholarship webpage.

 

EPSRC Doctoral Training Account

 

The School holds a Research Council Doctoral Training Account which is used to support PhD studentships for UK residents (fees and stipend). If you are an EU student, you must have been ordinarily resident in the UK for the 3 years prior to the start of an award, have a residence in the UK, and hold settled or pre-settled status. To be eligible for an award, candidates must hold at least an upper second class honours degree from a UK university, or an overseas equivalent. These scholarships are highly competitive.

Other School funds may be used to fund both home/EU and overseas students for fees and stipend.

For more details, contact socs-pgr-enquiries@glasgow.ac.uk or consult the Graduate School information page.

Graduate School Scholarships

Deadline: 31 January 2025.

The College of Science & Engineering has a number of PhD research scholarships, for which academically excellent candidates, home/EU and overseas, are encouraged to apply. Applicants must hold at least an upper second class honours degree or equivalent. The value of the scholarship award includes fees at the home and international (including EU) fee rate and a maintenance award commensurate with Research Council guidelines. These scholarships are highly competitive.

For further details, contact socs-pgr-enquiries@glasgow.ac.uk or see College of Science & Engineering PGR scholarships.

China Scholarship Council (CSC) scholarships

Deadline: 31 January 2025.

This scheme provides academically excellent Chinese students with the opportunity to study for a PhD at the University of Glasgow. The scholarships are supported jointly by the China Scholarship Council and the University of Glasgow. The School of Computing Science is typically awarded a few scholarships per year depending on the quality of the applicants.

Further information can be found on the College of Science and Engineering website.

James McCune Smith PhD Scholarship

The James McCune Smith Scholarships fund Black UK domiciled students to undertake PhD research at the University in any research area in which we can offer supervision. They provide an enhanced experience through external mentors, placements, leadership training, community-building activities and networking opportunities.

More details are available at James McCune Smith PhD Scholarships.

Other scholarship sources

Applicants may find the following sources of information useful in seeking funding:

For international applicants:
  • British Council: applicants are advised to consult their local British Council Office for information on scholarship opportunities.
  • Chevening Scholarships
  • Commonwealth Shared Scholarship Scheme

Project proposals

Education and Practice (EAP)

Widening participation and inclusivity in Theoretical Computer Science education - Dr Oana Andrei

This PhD project seeks to make advanced topics in theoretical computer science (TCS) - such as formal methods, complexity theory, and cryptography - more accessible and inclusive for students with diverse backgrounds and experiences. Recognising that students often approach these subjects with varying levels of mathematical preparation and preconceptions, this research will develop and implement inclusive teaching strategies designed to reduce barriers and foster engagement in TCS courses.

The project leverages Universal Design for Learning (UDL) principles and cognitive theories to create flexible, accessible approaches that support students from all backgrounds. Inclusive curriculum components will address common preconceptions about TCS, offer scaffolding to accommodate different levels of mathematical and logical reasoning, and connect abstract concepts to real-world applications to build relevance and interest.

The PhD candidate will work closely with faculty to design, implement, and evaluate these inclusive practices within TCS courses. This opportunity is ideal for applicants interested in TCS, educational research, and inclusive pedagogy. 

______________________________________________________________________________________________________

Integrating Socially Responsible Computing in Theoretical Computer Science education - Dr Oana Andrei

This research aims to incorporate Socially Responsible Computing (SRC) principles into the Theoretical Computer Science (TCS) curriculum, emphasising the societal and technological impacts of foundational computing concepts. TCS underpins critical areas such as algorithms, cryptography, and complexity theory, which have broad real-world implications - from data privacy and fairness to environmental sustainability. Despite this, TCS education typically focuses on formalism and abstraction, often overlooking the ethical and social dimensions of these theories when applied in practice.

This project will identify key SRC competencies relevant to TCS, such as ethical responsibility, sustainability, and technological fairness, guided by theories such as Transformative Learning, Constructivist Learning, Reflective Practice, and Threshold Concepts (or other relevant theoretical frameworks). It will then develop and implement pedagogical interventions that embed these competencies into TCS courses, ensuring that students gain an understanding of the societal impact of their theoretical knowledge. The project will assess the effectiveness of these interventions on student learning and ultimately produce a scalable framework that enables TCS educators to integrate SRC into their curricula effectively. 

This opportunity is ideal for applicants interested in exploring the intersection of computer science theory and social responsibility, with a strong commitment to advancing ethical, sustainable, and socially impactful computing education.

______________________________________________________________________________________________________

Developing competencies in algorithmic verification using verification-aware tools - Dr Oana Andrei

This PhD project will develop and evaluate a competency-based framework for teaching formal verification and algorithmic thinking skills using verification-aware tools, with Dafny or an equivalent verification-aware language as a case study. The project will focus on designing scalable teaching methods that introduce students to formal verification concepts and allow easy adaptation to different verification tools. Guided by theories such as Competency-Based Learning, Cognitive Apprenticeship, and Constructivist Learning, the research will deliver insights into using verification tools effectively in educational settings, as well as a toolkit for integrating formal verification with minimal disruption to existing curricula.

This opportunity is ideal for applicants interested in combining formal methods, algorithmic reasoning, and computing education research to develop innovative tools and frameworks for enhancing verification and problem-solving skills in AI-assisted learning environments.

______________________________________________________________________________________________________

Integrating AI-assisted tools with verification-aware languages to support algorithmic thinking - Dr Oana Andrei

This project will investigate how AI-assisted tools can enhance the learning of verification skills in computing, using Dafny or an equivalent verification-aware programming language as a pilot. The PhD candidate will design AI-supported features (e.g., suggestions, error detection) within the verification tool environment, evaluating how these features influence students' formal reasoning skills and algorithmic competencies. Findings will guide future use of AI tools in formal methods education and support a framework for integrating AI with verification-aware languages across different learning contexts.

This opportunity is ideal for applicants interested in combining formal methods, algorithmic reasoning, and computing education research to develop innovative tools and frameworks for enhancing verification and problem-solving skills in AI-assisted learning environments.

______________________________________________________________________________________________________

Assessing agile software engineering culture in industry - Dr Peggy Gregory

It is often assumed that the benefits of agile methods can be achieved through the adoption of a set of best practices. However, there is increasing evidence that in order to succeed in being agile and adaptable, software engineering teams need to address and change deeper levels of their working culture. A number of models of agile culture have been developed that help to map out the features of an agile culture, but there has been little work investigating how that culture develops in different contexts and how it changes.

This project will explore how useful agile culture models are at assessing software engineering culture, identifying areas for improvement and whether there are gaps in our understanding of what makes a good working culture for software engineering teams.

______________________________________________________________________________________________________

The role of generative AI in agile software engineering industry practice - Dr Peggy Gregory

The growth of generative AI has the potential to significantly transform software development in industry by speeding up repetitive tasks and enabling software engineers to focus on the most important aspects of the job. Agile approaches to software engineering address many of the socio-technical aspects of tech development such as customers' needs, developers' performance and team dynamics. Generative AI is already being used widely by software engineers to automate and enhance software development tasks, but there are many unanswered questions about the acceptability of these approaches and what happens at the interface between AI and human software development work.

This project will explore the efficacy of using generative AI to enhance the human dimensions of software engineering practice, with a particular focus on customer collaboration, teamwork, and assessing quality. Taking an empirical approach, the research will explore where the boundaries are for the use of AI in these environments, looking at utility, risk and accountability.

______________________________________________________________________________________________________

Agile techniques for sustainable software engineering in industry - Dr Peggy Gregory

Software engineering sustainability is a broad concept that encompasses a variety of aspects, including the capability of software to endure, minimising energy use, improving working conditions in teams, and software quality. Agile methods introduce iterative, incremental and team-based approaches to developing software that have been widely adopted in industry. Both agility and sustainability are systems-thinking approaches that look holistically at problems and solutions. They are also both acknowledged to be increasingly relevant in today's software-intensive business environment.

This project will investigate which agile techniques for sustainability can be adapted and used in industrial software engineering settings, and how. This will involve identifying and developing agile sustainability techniques and working with industry partners to assess the viability and effectiveness of their use.

______________________________________________________________________________________________________

Using Generative AI to enable Mastery Learning in Computing Education - Dr Syed Waqar Nabi

This proposed project is based on the premise that generative AI (gen AI) can address key challenges in mastery learning for computing subjects: not just introductory programming, which is a well-explored area, but more advanced subjects too. It could do so by providing personalised, adaptive learning experiences, including varied instructional materials, one-on-one tutoring, repeated testing opportunities, and immediate feedback.

The project could initially involve investigations into the domains of mastery learning pedagogy, computing science education, and the use of gen AI in education. Based on this analysis, a set of requirements could be created for a specific use case (e.g. in the context of a course offered by Computing Science at Glasgow University), and then a web-based application could be prototyped, tested, and evaluated with real users. The project could also explore this problem from ethical, legal, and sustainability perspectives.

______________________________________________________________________________________________________

Addressing Disparities in Computing Higher Education – Challenges and Opportunities in the World of AI - Dr Syed Waqar Nabi

This project will seek to address the disparities in competency and quality of university graduates in Low- and Middle-Income Countries (LMICs). A competent workforce in software engineering (SE) is increasingly important for economic and social development. At the individual level, competency in this domain places one in a strong position to find fulfilling, dignified and fruitful work, leading to other positive changes for the individuals and their dependents. For a country, a competent workforce in this area has a direct impact on its ability to tackle challenges like reducing poverty, improving health and well-being, encouraging innovation, and reducing inequalities, and an indirect impact on its ability to overcome many other societal challenges. In short, there is a strong connection between software engineering skills and positive outcomes for the individual and the society.

Many universities, especially in LMICs, are unable to produce graduates with the skills that the industry requires, and it has been observed that what many universities seem to be teaching students is decoupled from industry expectations. While this industry-academia gap is not unique to LMICs, it is more acute in such countries, compounded by a deficit in the quality of education when compared with more developed countries.

Against this backdrop, this project will explore how generative AI might be used in the specific context of LMICs to improve the competency of computing science and software engineering graduates, with the aim of removing disparities, aligning with UN SDGs 4 and 10. The project will look at technical, ethical, pedagogical and sustainability challenges.

Formal Analysis, Theory and Algorithms (FATA)

Practical sortings for bigraphs - Dr Michele Sevegnani

Bigraphs are a universal mathematical model for representing the spatial configuration of physical or virtual objects and their interaction capabilities. They were initially introduced by Robin Milner [1] and then extended to bigraphs with sharing in [2] to accommodate spatial locations that can overlap. A bigraph consists of a pair of relations over the same set of entities: a directed acyclic graph, called the place graph, representing topological space in terms of containment, and a hyper-graph, called the link graph, expressing the interactions and (non-spatial) relationships among entities. Each entity is assigned a control that determines its number of links and whether it is atomic, i.e. it cannot contain other entities.

In most applications of bigraphs, it is useful to restrict the set of admissible bigraphs. This is achieved by classifying controls through sorts. Intuitively, a sorting scheme assigns a sort to each entity type and imposes a set of constraints on these sorts. For example, we may specify that a sensor node cannot contain another sensor node. Currently, no tool provides support for working with sorting schemes, including checking that a bigraph conforms to a given scheme, so instances must be verified by hand.
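As an illustration of the kind of check that is currently done by hand, the following minimal Python sketch (with invented controls and an ad-hoc encoding of a place graph; BigraphER's actual model syntax is not shown) tests one sorting constraint, namely that no Sensor entity may contain another Sensor:

```python
# Minimal sketch: checking one sorting constraint on a place graph by hand,
# the kind of check the project would automate. Controls and the example
# graph are illustrative only.

# Place graph as child -> parent containment.
control = {"s1": "Sensor", "s2": "Sensor", "r1": "Room"}
parent  = {"s1": "r1", "s2": "s1"}   # s2 sits inside s1: should be rejected

def violates_no_nesting(ctrl, par, sort="Sensor"):
    """Return entities of the given sort contained (transitively) in that sort."""
    bad = []
    for node in ctrl:
        p = par.get(node)
        while p is not None:
            if ctrl[node] == sort and ctrl[p] == sort:
                bad.append((node, p))
            p = par.get(p)
    return bad

print(violates_no_nesting(control, parent))  # [('s2', 's1')]
```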

The goal of this project is to extend the theory of sortings for bigraphs [3,4] by taking advantage of the latest research in spatial logic [5,6]. In particular, we will define suitable spatial modalities over the place and link graphs to allow end-users to easily specify common constraints. Novel verification and typing algorithms will be integrated into the BigraphER toolchain [7].

References

  1. Milner, Robin. "Bigraphs and their algebra." Electronic Notes in Theoretical Computer Science 209 (2008): 5-19.
  2. Sevegnani, Michele, and Muffy Calder. "Bigraphs with sharing." Theoretical Computer Science 577 (2015): 43-73.
  3. Birkedal, Lars, Søren Debois, and Thomas Hildebrandt. "Sortings for reactive systems." CONCUR 2006–Concurrency Theory: 17th International Conference, CONCUR 2006, Bonn, Germany, August 27-30, 2006. Proceedings 17. Springer Berlin Heidelberg, 2006.
  4. Birkedal, Lars, Søren Debois, and Thomas Hildebrandt. "On the construction of sorted reactive systems." CONCUR 2008-Concurrency Theory: 19th International Conference, CONCUR 2008, Toronto, Canada, August 19-22, 2008. Proceedings 19. Springer Berlin Heidelberg, 2008.
  5. Ciancia, Vincenzo, et al. "Spatial logic and spatial model checking for closure spaces." Formal Methods for the Quantitative Evaluation of Collective Adaptive Systems: 16th International School on Formal Methods for the Design of Computer, Communication, and Software Systems, SFM 2016, Bertinoro, Italy, June 20-24, 2016, Advanced Lectures 16 (2016): 156-201.
  6. Conforti, Giovanni, Damiano Macedonio, and Vladimiro Sassone. "Spatial logics for bigraphs." Automata, Languages and Programming: 32nd International Colloquium, ICALP 2005, Lisbon, Portugal, July 11-15, 2005. Proceedings 32. Springer Berlin Heidelberg, 2005.
  7. Sevegnani, Michele, and Muffy Calder. "BigraphER: Rewriting and analysis engine for bigraphs." Computer Aided Verification: 28th International Conference, CAV 2016, Toronto, ON, Canada, July 17-23, 2016, Proceedings, Part II 28. Springer International Publishing, 2016.

Contact: michele.sevegnani@glasgow.ac.uk, https://www.dcs.gla.ac.uk/~michele/

____________________________________________________________________________

Solving Hard Problems, and Proving We Did It Correctly - Dr Ciaran McCreesh

Constraint programming is a technique used to solve problems where we have some resources, some constraints on how we may use combinations of resources, and an objective (like "use every resource", or "maximise profit"). In theory, such problems are hard, but in practice, constraint programming solvers can quickly tackle large industrial problem instances involving timetabling, scheduling, resource allocation, and vehicle routing, as well as logic and newspaper puzzles like Sudoku. This project revolves around designing and implementing better algorithms for these solvers.

Although constraint programming solvers are hugely successful in practice, we do not really understand why they work so well. One aspect of this project could be to use empirical algorithmics and large-scale scientific experiments to try to get a better idea of how solvers are able to reach their conclusions.  The idea is to augment an algorithm implementation so that it takes measurements as it runs, and then try to work out why the theoretical worst-case exponential behaviour doesn't happen.

Another problem is trusting these solvers. Because they make use of intelligent reasoning techniques (like when a human solves a Sudoku puzzle), conventional software testing techniques have not been very successful at identifying bugs. An emerging alternative is known as "proof logging" or "certifying": the idea is that alongside a solution, a solver should output an independently verifiable mathematical proof that the solution is correct. This does not guarantee that a solver is bug-free, but it does mean that if a solver ever produces an incorrect answer, this can be detected (even if it is due to hardware or compiler errors). Another aspect of this project could be to develop proof logging for new kinds of reasoning.
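The following toy sketch illustrates the certifying mindset, though not a full proof system: the checker is independent of, and far simpler than, the solver whose answer it validates. (Real proof logging, e.g. VeriPB-style proofs, also certifies "no solution" and "optimal" claims, which this toy checker cannot.)

```python
# Minimal sketch of the certifying idea: never trust the solver's answer,
# re-check it with independent, much simpler code.

def check_colouring(edges, colouring, n_colours):
    """Independently verify a claimed proper graph colouring."""
    if any(c >= n_colours for c in colouring.values()):
        return False
    return all(colouring[u] != colouring[v] for u, v in edges)

edges = [(0, 1), (1, 2), (2, 0)]           # a triangle needs 3 colours
claimed = {0: 0, 1: 1, 2: 2}               # what a solver might output
print(check_colouring(edges, claimed, 3))  # True: the answer is certified
```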

A Glasgow-Singapore scholarship is available to work on this kind of problem together with Michele Sevegnani and a collaborator at A*STAR, including a year-long secondment to Singapore.

Contact: email, web

____________________________________________________________________________

Reusable Rich Techniques for Parallel Constraint Programming - Dr Ciaran McCreesh and Professor Phil Trinder

Intelligent constraint programming (CP) algorithms are vital to all developed economies. Vanilla search algorithms (i.e. good old-fashioned chronological backtracking) already use advanced techniques to reduce the search space, such as search-order heuristics, accumulated knowledge (nogoods), and pruning of non-beneficial search tasks. Rich search algorithms (discrepancy-based search, restart search with Luby sequences, backjumping, learning while searching) incorporate sophisticated techniques such as maintaining complex knowledge bases, search restarts and adaptive (impact and fuzzy) search heuristics.

Many CP problems are computationally hard, and would benefit from parallelism, but implementation difficulties mean that most modern solvers are either not parallelised at all, or are parallelised using ad-hoc methods aimed at a single scale of parallelism and specific family of instances.

A combination of technologies is now available that allows a more general approach, namely: parallelism frameworks that allow high-performance implementations to be created for multi-scale architectures; search-specific dynamic schedulers that can preserve search heuristics and manage highly irregular search tasks; and algorithmic skeletons that allow common patterns of parallel coordination to be encoded.
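As a toy illustration of the skeleton idea, the Python sketch below statically splits the root of a backtracking search (here n-queens, standing in for a CP search tree) across worker processes; a real solver would add dynamic work stealing and shared learnt knowledge:

```python
# Minimal sketch of one skeleton for parallel tree search: split the search
# tree at the root into independent subtrees and farm them out to workers.
from concurrent.futures import ProcessPoolExecutor

def queens(n, placed):
    """Count completions of a partial n-queens placement (one column per row)."""
    row = len(placed)
    if row == n:
        return 1
    total = 0
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            total += queens(n, placed + [col])
    return total

if __name__ == "__main__":
    n = 10
    with ProcessPoolExecutor() as pool:
        # one task per root branch; a dynamic scheduler would rebalance these
        counts = pool.map(queens, [n] * n, [[c] for c in range(n)])
    print(sum(counts))   # 724 solutions for n = 10
```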

By combining these technologies, we speculate that it is possible to achieve a step change in CP solver technology by developing a framework for parallel search that works at many scales, from desktop to large cluster, and with rich search techniques.

The aims of the project are as follows. (1) The design of parallel CP algorithms. (2) To understand the interaction between parallelism and rich CP search techniques. (3) To understand the tradeoffs between performance, scalability, and reproducibility in parallel CP search. (4) To explore abstractions for designing parallel CP algorithms.

The project will run as a collaboration between the FATA and GLASS sections, with both experienced and early-career supervisors.

Contact: Blair Archibald, Ciaran McCreesh, Phil Trinder

 


Efficient Algorithms for Matching Problems Involving Preferences - Professor David Manlove

Matching problems involving preferences are all around us: they arise when agents seek to be allocated to one another based on ranked preferences over potential outcomes.  Examples include the assignment of junior doctors to hospitals, kidney patients to donors, and children to schools.  For example, in the first case, junior doctors may have preferences over the hospitals where they wish to undertake their training, and hospitals may rank their applicants based on academic merit.

In many of these applications, centralised matching schemes produce allocations to clear their respective markets.  One of the largest examples is the National Resident Matching Program in the US, which annually matches around 40,000 graduating medical students to their first hospital posts.  At the heart of schemes such as this are algorithms for producing matchings that optimise the satisfaction of the agents according to their preference lists.  Given the number of agents typically involved, it is of paramount importance to design efficient (polynomial-time) algorithms to drive these matching schemes.
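For illustration, here is a minimal sketch of the classic Gale-Shapley deferred-acceptance algorithm on a toy one-to-one instance; real schemes such as the NRMP run more general hospitals/residents variants with quotas, which are not shown here:

```python
# A minimal Gale-Shapley (deferred acceptance) sketch on a toy instance.

def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching; proposers get their best stable partner."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # r trades up; old partner re-proposes
            match[r] = p
        else:
            free.append(p)                # r rejects p; p tries next preference
    return {p: r for r, p in match.items()}

doctors = {"ada": ["gla", "edi"], "bob": ["gla", "edi"]}
hospitals = {"gla": ["bob", "ada"], "edi": ["ada", "bob"]}
print(gale_shapley(doctors, hospitals))   # {'bob': 'gla', 'ada': 'edi'}
```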

There are a wide range of open problems involving the design and analysis of algorithms for computing optimal matchings in the presence of ranked (or ordinal) preferences.  Many of these are detailed in the book “Algorithmics of Matching Under Preferences” by David Manlove (http://www.optimalmatching.com/AMUP).  This PhD project will involve tackling some of these open problems with the aim of designing new efficient algorithms for a wide range of practical applications.  There will also be scope for algorithm implementation and empirical evaluation, and the possibility to work with practical collaborators such as the National Health Service.

The importance of the research area was recognised in 2012 through the award of the Nobel Prize in Economic Sciences to Alvin Roth and Lloyd Shapley for their work on algorithms for matching problems involving preferences.

Contact: email, web.


Model checking UAVs - Professor Alice Miller

Increasingly, software controlled systems are designed to work for long periods of time without human intervention. They are expected to make their own decisions, based on the state of the environment and knowledge acquired through learning. These systems are said to be autonomous and include self-driving cars, autonomous robots and unmanned aerial vehicles (UAVs).

The verification of autonomous systems is a vital area of research. Public acceptance of these systems relies critically on the assurance that they will behave safely and in a predictable way, and formal verification can go some way to provide this assurance. In this project we aim to embed a probabilistic model checking engine within a UAV, which will enable both learning and runtime verification.

Model checking is a Computing Science technique used at the design stage of system development. A small logical model of a system is constructed in conjunction with properties expressed in an appropriate temporal logic. An automated verification tool (a model checker) allows us to check the properties against the model. Errors in the model may indicate errors in the system design. New verification techniques allow this process to take place at run-time, and so enable us to analyse possible future behaviours and the best course of action to take to achieve a desired outcome. For example, a verification engine on board a UAV might allow us to determine the correct response of the UAV during extreme weather conditions in order to avoid a collision. Or it might predict the best route to take to achieve its mission, given the likelihood of failure of some of the UAV components.
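As a toy illustration of the computation at the heart of a probabilistic model checker, the sketch below (with invented states and made-up probabilities) estimates the probability of eventually reaching a collision state in a small Markov chain by value iteration:

```python
# Toy sketch (made-up states and probabilities): the fixed-point computation
# at the core of probabilistic model checkers such as PRISM or Storm.

transition = {                       # state -> [(successor, probability)]
    "cruise": [("cruise", 0.85), ("gust", 0.10), ("landed", 0.05)],
    "gust":   [("cruise", 0.70), ("gust", 0.20), ("collision", 0.10)],
}                                    # "landed" and "collision" are absorbing
TARGET = "collision"

def reach_probability(start, eps=1e-9):
    """Value iteration for P(eventually reach TARGET) from each state."""
    states = set(transition) | {t for ss in transition.values() for t, _ in ss}
    p = {s: 1.0 if s == TARGET else 0.0 for s in states}
    while True:
        delta = 0.0
        for s, succs in transition.items():
            new = sum(prob * p[t] for t, prob in succs)
            delta, p[s] = max(delta, abs(new - p[s])), new
        if delta < eps:
            return p[start]

print(f"P(collision from cruise) = {reach_probability('cruise'):.4f}")  # 0.2000
```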

Objectives:

  • Identify and refine existing runtime verification techniques suitable for in-flight UAVs
  • Implement a verification methodology demonstrator on board a UAV (developed in the School of Engineering at Glasgow)

Contact: email, web.


Exploiting Graph Structure Algorithmically – Dr Kitty Meeks and Dr Jess Enright

Graphs – otherwise known as networks – are an extremely powerful structure for representing diverse datasets: any data that involves relationships between pairs of objects is naturally represented in this way. They can, for example, represent which pairs of people are acquainted in a social network, which pairs of cities are connected by a direct transport link, or which pairs of farms sell animals to each other.

Once we have represented our data as a graph, we want to answer questions about it. What is the largest group of people who all know each other? What is the fastest or cheapest route for visiting a certain set of cities? Where should we monitor a livestock trade network to detect a disease outbreak as soon as possible? The bad news is that many questions of this type are computationally intractable – algorithms become prohibitively slow on large instances – if we need to solve them on arbitrary graphs.

However, not all graphs are the same. A social network graph has different structural properties from a transport network graph, which in turn has different properties from a livestock trade graph. In many cases we can leverage an understanding of the mathematical structure of the graphs to design algorithms which scale much better when applied to graphs having these specific properties. Either (or both) theoretical and experimental study can help us understand when and how these algorithms will be practically useful.
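As a small illustration of this style of algorithm design, the sketch below solves vertex cover by bounded-depth branching: the running time grows exponentially only in the size k of the cover sought, not in the size of the graph, a classic "FPT" pattern of the kind mentioned below.

```python
# A hedged sketch of one FPT pattern: bounded-depth branching for vertex
# cover, roughly O(2^k * n) time when asking for a cover of size at most k.

def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists."""
    uncovered = list(edges)
    if not uncovered:
        return set()
    if k == 0:
        return None
    u, v = uncovered[0]
    for pick in (u, v):                      # any cover contains u or v
        rest = [e for e in uncovered if pick not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

triangle_plus = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(vertex_cover(triangle_plus, 2))   # {0, 2} covers every edge
```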

We are particularly interested in temporal graphs that change over time, and multilayer graphs that model qualitatively different types of connections (in different 'layers').

A PhD will involve addressing one or more case studies in this area, with specific applications potentially drawn from Medicine, Epidemiology, Social Network Analysis or Statistics. For any case study, the research might involve identifying structural parameters of the relevant graphs, exploiting these to design efficient “FPT” algorithms, and/or implementing and testing the algorithms on real data.

Contact: email, web (Kitty) or email, web (Jess)

____________________________________________________________________________

Behavioural Types for Actor Languages - Dr Simon Fowler

Actor languages like Erlang and Elixir have proved to be effective tools for writing scalable and reliable distributed systems. Behavioural type systems extend usual type system guarantees (e.g., "have I tried to multiply an integer by a string?") to more fine-grained guarantees about program behaviour (e.g., "does this program correctly follow a communication protocol?").
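Behavioural types enforce such properties statically, at compile time; as a rough runtime analogue, the toy sketch below (with an invented protocol) replays a message trace against a protocol automaton to show the kind of property involved:

```python
# Rough illustration only: behavioural type systems establish this *statically*.
# Here we simply replay a message trace against a protocol automaton.

# A toy protocol: login must precede queries; logout ends the session.
PROTOCOL = {
    ("start", "login"):  "ready",
    ("ready", "query"):  "ready",
    ("ready", "logout"): "done",
}

def conforms(trace, state="start"):
    """Does a message trace follow the protocol and end in a final state?"""
    for msg in trace:
        if (state, msg) not in PROTOCOL:
            return False          # a protocol violation, caught only at runtime
        state = PROTOCOL[(state, msg)]
    return state == "done"

print(conforms(["login", "query", "query", "logout"]))  # True
print(conforms(["query", "logout"]))                    # False: query before login
```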

The EPSRC-funded STARDUST project (https://epsrc-stardust.github.io/) investigates behavioural type systems for reliable distributed applications. The goal of STARDUST is to allow developers to write resilient, failure-tolerant distributed applications in actor languages, while using behavioural types to provide strong correctness guarantees on communication behaviour.

Current work at Glasgow has focused on two directions: the first uses *mailbox types* [1, 2] to reason about the messages contained in the message queue for a process, and the second uses *multiparty session types* [4] to reason about adaptable [3] and event-driven actors.

There are several possible directions for a PhD project in each area. Please get in touch to discuss further.

Mailbox Types
  • How do we effectively compile programs using first-class mailboxes to standard actor runtimes?
  • Can we design more flexible constraint-based algorithmic type systems for mailbox types?
  • Can we design flexible analyses which avoid unsafe mailbox aliasing?
  • How do we extend mailbox type systems to handle failure, and how do mailbox types interact with idioms such as supervision hierarchies?
Session Types
  • Can we integrate static checking of actor-based multiparty session types with languages/frameworks such as Erlang or Akka?
  • Are there constructs we can add to multiparty session types to ease the burden of writing failure-tolerant code?

References:

[1] Ugo de' Liguoro and Luca Padovani. Mailbox Types for Unordered Interactions. ECOOP 2018.

[2] Simon Fowler, Duncan Paul Attard, Franciszek Sowul, Phil Trinder, and Simon J Gay.
Special Delivery: Programming with Mailbox Types. Under review, 2023.

[3] Paul Harvey, Simon Fowler, Ornela Dardha, and Simon J. Gay.
Multiparty Session Types for Safe Runtime Adaptation in an Actor Language.
ECOOP 2021.

[4] Kohei Honda, Nobuko Yoshida, and Marco Carbone.
Multiparty Asynchronous Session Types. POPL 2008.

__________________________________________________________________________

Session Types for Self-Adaptive Systems - Dr Simon Fowler, Dr Paul Harvey

Since the 1960s, computer hardware has evolved from single resource-poor machines to many interconnected machines with diverse resource availability and roles; the Internet of Things and sensor systems are particular examples.

From a software perspective, this complexity has made it increasingly difficult to guarantee that software is working correctly: both when writing the software and when accounting for the different failure and error conditions that can occur. To cope with these dynamic environments, a popular approach is to consider software that is able to self-adapt, meaning software that can change its behaviour based on its operating environment. Although self-adaptation is a powerful approach, it can give rise to situations where software no longer behaves as the programmer intended, leading to potentially unsafe and unstable operation.

In previous work we have shown how *multiparty session types* can be used to guarantee correct communication *even in the presence of self-adaptation*. Although this gives us strong guarantees, several problems remain:

  • The approach requires a *top-down* specification of the interactions in a system; this approach limits self-adaptation and requires engineering a project from scratch with this methodology in mind, or making substantial edits to the existing system.
  • The current work on self-adaptation involves substantial restrictions on a participant's communication behaviour.
  • Although our approach allows *replacement* of a component's behaviour, the replaced behaviour must have the *same* communication behaviour, which is highly restrictive.
  • All participants need to be discoverable; instead we would like to have the possibility of continuing even if some participants cannot be found.
  • Components must all currently be written in the same language.

The purpose of this PhD is to explore bottom-up, on-demand session typing for self-adaptive systems.  We expect there to be a particular focus on developing type systems, methods for runtime protocol compatibility checking, and/or distributed runtime infrastructure for ensuring safe communication between adaptive components.

This project is jointly supervised by Dr Simon Fowler and Dr Paul Harvey and would be suitable for students across the theory-practice spectrum; we would be happy to agree a precise workplan based on the interests and skills of the student. Candidates are strongly recommended to contact us for an informal discussion prior to making an application.

Contact: Dr Simon Fowler, Dr Paul Harvey

__________________________________________________________________________

Structural and Algorithmic Results in the Student-Project Allocation Problem - Dr Sofiat Olaosebikan

Matchings under preferences have been a subject of profound interest in the fields of computer science, economics, and operations research, due to their relevance in various real-world applications. The evolution of research in this domain traces back to the seminal 1962 paper by Gale and Shapley, whose foundational work on stable allocations and market design was later recognised with the 2012 Nobel Prize in Economic Sciences awarded to Alvin Roth and Lloyd Shapley. Their theoretical framework has proven invaluable for solving real-world matching problems, including matching pupils to schools, junior doctors to hospitals, and donor kidneys with transplant patients.

Building on this rich theoretical foundation, this project will investigate the Student-Project Allocation problem (SPA), a two-sided matching problem where students and lecturers have ranked preferences over potential allocations. While the SPA model emerged from university project assignments, its applications extend far beyond academia, offering insights into resource management across diverse domains including wireless networks, cloud computing, and distributed systems. The mathematical structures underlying SPA have broad implications for optimization problems in both theoretical computer science and practical applications.

This PhD project will generate novel structural and algorithmic results for the SPA variant where lecturers can have preferences over students (SPA-S), leveraging and extending the fundamental result that the set of stable matchings forms a distributive lattice and admits a rotation poset. Key research objectives include developing efficient algorithms for finding all stable pairs, enumerating and counting stable matchings, and identifying stable matchings that satisfy additional optimality criteria. The project will then extend these theoretical and algorithmic contributions to address the more complex variant where preferences can involve ties, an extension that better reflects real-world scenarios where strict rankings may not be feasible or desirable.

Glasgow Systems Section (GLASS)

Low-Carbon Federated Learning in Widely Distributed IoT Systems - Dr Lauritz Thamsen, Dr Paul Harvey

Federated learning (FL) enables model training across widely distributed devices without sharing training data. This aligns well with many applications that run on widely distributed, heterogeneous compute and communication IoT resources (e.g. cyber-physical transportation systems, water infrastructures, energy grids, and healthcare applications). Take for example a fleet of electric vehicles (EVs) used for goods delivery. EVs are commonly equipped with substantial computational resources and often run numerous AI/ML applications to assist their drivers. To train the models for these applications in a privacy-aware and communication-efficient manner, fleets of EVs can employ Federated Learning.

However, FL introduces inefficiencies compared to centralized model training. At the same time, EVs could be charged using a variety of energy sources, including regional grids, distributed renewables, and energy storage. This offers opportunities to align the significant but flexible computational load of FL training with the availability of low-carbon grid energy or on-site renewable energy.
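A minimal sketch of this combination, using NumPy and entirely made-up numbers: in each round of federated averaging, only the clients whose current energy supply is cleanest are selected for training.

```python
# Toy sketch of carbon-aware client selection in FedAvg. Local "training" is
# a stand-in update towards each client's data mean; all figures are invented.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 8, 4
global_model = np.zeros(dim)
local_data = [rng.normal(loc=1.0, size=(20, dim)) for _ in range(n_clients)]

def local_update(model, data, lr=0.1):
    """One step towards the client's local mean (stand-in for local training)."""
    return model + lr * (data.mean(axis=0) - model)

for rnd in range(5):
    carbon = rng.uniform(50, 400, n_clients)     # gCO2/kWh per client (made up)
    chosen = np.argsort(carbon)[:4]              # train only the cleanest half
    updates = [local_update(global_model, local_data[c]) for c in chosen]
    global_model = np.mean(updates, axis=0)      # FedAvg aggregation

print(np.round(global_model, 3))                 # drifts towards the data mean
```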

A PhD project could go beyond the previous work in this area by focusing explicitly on the large-scale, widely distributed training setting of critical cyber-physical infrastructure systems and, at the same time, break up some of the simplifying assumptions of previous work (e.g. https://doi.org/10.1145/3632775.3661970, https://arxiv.org/abs/2310.17972, https://doi.org/10.1145/3632775.3639589).

 

Contact: Dr Lauritz Thamsen (https://lauritzthamsen.org/) and Dr Paul Harvey (http://www.paul-harvey.org/)

__________________________________________________________________________________________________________

 

Carbon-Aware Edge Computing - Dr Lauritz Thamsen, Dr Yehia Elkhatib

This PhD project will research how novel distributed computing paradigms (such as edge computing) and distributed energy generation (from sources such as wind and solar) can be combined to enable low-carbon and sustainable computing.

As data analytics and machine learning applications are increasingly deployed throughout our cities, factories, and homes, the computing infrastructure for these applications is becoming more distributed and diverse. That is, the intelligent and cyber-physical systems of the Internet of Things will not be implemented with centralized cloud resources alone. Such resources are simply too far away from sensors and devices, leading to high latencies, bandwidth bottlenecks, and unnecessary energy consumption. Additionally, there are often privacy and safety requirements mandating distributed architectures. Therefore, new distributed computing paradigms, such as Edge Computing, aim to provide computing and storage closer to data sources and users.

Meanwhile, the IT industry is starting to recognize its increasing environmental footprint and organizations are setting carbon emission targets on the way to net zero emissions. One approach towards more sustainable and cost-effective computing is equipping edge and cloud infrastructure with on-site renewable energy sources. In addition, the emissions associated with consuming energy from public grids also fluctuate, depending on when and where the energy is consumed. This allows for reducing the emissions of computing by scheduling applications based on the expected carbon intensity of the available energy. This emerging area is known as Carbon-Aware Computing.

Effectively leveraging differences in the carbon intensity of energy systems for computing applications is far from trivial, though, and hence a good topic for a PhD project. It requires integrating state-of-the-art energy forecasts as well as accurate estimates of the performance and power consumption of applications and compute resources. Furthermore, it should take into consideration possible errors in energy forecasts (e.g. due to unforeseen spikes in demand) and performance estimates (e.g. due to hardware/software failures or interference). Thus, carefully planning the placement and scheduling of applications is not enough. Instead, applications running across edge and cloud resources will need to be monitored and adjusted at runtime as needed. For example, if an edge site is supplied with energy generated by solar panels, cloudy weather can easily make it necessary to offload applications to cloud data centres; as these are often much more efficient in using energy for computation, less grid energy could be consumed in the cloud. However, the benefits of offloading need to outweigh the costs of moving a job and its data.

We expect this PhD project to investigate integrating (1) performance and power models and (2) grid carbon intensity and renewable energy forecasts for (3) carbon-aware dynamic resource allocation and offloading across edge and cloud environments.
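For illustration, the toy sketch below captures the basic offloading trade-off; all figures are assumptions, and data transfer is crudely charged at the edge site's carbon intensity (a simplification a real model would refine).

```python
# Toy offloading decision: move a job to the cloud only if the carbon saved
# on compute exceeds the carbon cost of moving the data. Numbers are invented.

def grams_co2(kwh, intensity):
    return kwh * intensity  # intensity in gCO2 per kWh

def should_offload(job_kwh, transfer_kwh, edge_intensity, cloud_intensity):
    """True if moving the job and running it in the cloud emits less carbon."""
    run_local = grams_co2(job_kwh, edge_intensity)
    run_remote = (grams_co2(job_kwh, cloud_intensity)
                  + grams_co2(transfer_kwh, edge_intensity))
    return run_remote < run_local

# Cloudy day: the edge site falls back to a 300 gCO2/kWh grid while the cloud
# region sits at 50 gCO2/kWh, so offloading wins despite the transfer cost.
print(should_offload(2.0, 0.5, edge_intensity=300, cloud_intensity=50))  # True
print(should_offload(2.0, 0.5, edge_intensity=60,  cloud_intensity=50))  # False
```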

Contact: Dr Lauritz Thamsen (https://lauritzthamsen.org/) and Dr Yehia Elkhatib (https://yelkhatib.github.io/)


Improving Internet Protocol Standards - Dr Colin Perkins

The technical standards underpinning Internet protocols are described in the RFC series of documents (https://www.rfc-editor.org/). These are supposed to provide accurate descriptions of the protocols used in the network, from which implementors can work. However, they're often written in an informal and imprecise style, and interoperability and security problems frequently arise because the specifications contain inconsistent terminology, describe message syntax in imprecise ways, and specify protocol semantics informally or by example. The authors of the specifications tend to be engineers, expert at protocol design but not at writing clear, consistent specifications. Further, specifications are increasingly written, and read, by those for whom English is not their native language, further complicating the issue.

Formal languages and tools for protocol specification and verification have been available for many years, but have not seen wide adoption in the Internet standards community. This is not because such tools offer no benefit, but because they have a steep learning curve. The benefits offered in terms of precision and correctness are not seen to outweigh the complexity in authoring and reading the resulting standards.

The goal of this project is to explore semi-formal techniques that can be used to improve protocol specifications, at a cost that's acceptable to the engineers involved in the standards community. This will involve tools to parse protocol specifications and to encourage authors towards the use of a structured English vocabulary that is both precise for the reader, especially the non-native speaker, and offers some ability for machine verification of protocol properties and generation of protocol code. Success will not be perfection, but rather uptake by the community of tools and novel techniques that improve specification clarity and help ensure correctness.
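As a toy illustration of this direction, the sketch below parses a constrained-English description of header fields and derives a working decoder from it; the vocabulary and the example sentence are invented for the illustration, not drawn from any RFC.

```python
# Toy sketch: a deliberately constrained English vocabulary for header fields
# that a tool can parse and turn into code. Vocabulary and spec are made up.
import re
import struct

FIELD = re.compile(r'an? (\d+)-bit field named "(\w+)"')

spec = '''The header begins with a 16-bit field named "SequenceNumber",
followed by a 32-bit field named "Timestamp".'''

fields = FIELD.findall(spec)     # [('16', 'SequenceNumber'), ('32', 'Timestamp')]
fmt = ">" + "".join({16: "H", 32: "I"}[int(bits)] for bits, _ in fields)

def parse_header(data):
    """Decode a header laid out exactly as the structured spec describes."""
    values = struct.unpack(fmt, data[:struct.calcsize(fmt)])
    return dict(zip((name for _, name in fields), values))

print(parse_header(struct.pack(">HI", 7, 123456)))
# {'SequenceNumber': 7, 'Timestamp': 123456}
```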

Contact: email, web

__________________________________________________________________________________________________________

Reducing Runtimes and Carbon Emissions by Better Managing NUMA - Professor Phil Trinder

As the number of general-purpose cores in an architecture rises above 16, only Non-Uniform Memory Access (NUMA) architectures can provide fast access to shared memory. NUMA reduces memory contention by partitioning memory into regions, where access to a local region is faster and higher-bandwidth than access to a remote region. NUMA architectures are becoming widespread, e.g. in servers, desktops and laptops.

Languages with automatic memory management, like Python, Java and Haskell, are increasingly popular, but there has been little systematic study of the impact of NUMA on them. Hence they present both:

A Challenge:- In languages with explicit memory management, like C/C++, the programmer laboriously lays out memory to exploit each new NUMA architecture. Languages with automatic memory management give the programmer far less control over memory and can pay higher NUMA performance penalties (10-20%).

An Opportunity:- In languages with automatic memory management the implementation is free to reorganise memory to optimise access patterns based on observed behaviour. Moreover, the program will not require refactoring for new NUMA architectures.

Research Aims: (1) To reach a deep understanding of the performance challenges posed by NUMA for languages with automatic memory management. (2) To investigate how to exploit this knowledge to develop better language implementation technologies, and hence improve the performance of programs in some representative language on NUMA. If there are 15 billion JVMs, a 5% reduction in runtimes is a huge reduction in cost and carbon emissions!

The project will run within the vibrant Glasgow Parallelism Research Group (http://www.dcs.gla.ac.uk/research/gpg/) with both experienced and youthful supervisors.

Contact: email

__________________________________________________________________________________________________________

Reusable Rich Techniques for Parallel Constraint Programming - Professor Phil Trinder

Intelligent constraint programming (CP) algorithms are vital to all developed economies. Vanilla search algorithms (i.e. good old fashioned chronological backtracking) already use advanced techniques to reduce the search space such as search order heuristics, accumulating knowledge (nogoods), and pruning non-beneficial search tasks.  Rich search algorithms (discrepancy-based search, restart search, Luby, back jumping, learning while searching) incorporate sophisticated techniques such as maintaining complex knowledge bases, search restarts and adaptive (impact and fuzzy) search heuristics.

Many CP problems are computationally hard, and would benefit from parallelism, but implementation difficulties mean that most modern solvers are either not parallelised at all, or are parallelised using ad-hoc methods aimed at a single scale of parallelism and specific family of instances.

A combination of technologies is now available that allows a more general approach, namely: parallelism frameworks that allow high-performance implementations to be created for multi-scale architectures; search-specific dynamic schedulers that can preserve search heuristics and manage highly irregular search tasks; and algorithmic skeletons that allow common patterns of parallel coordination to be encoded.

By combining these technologies we speculate that it is possible to achieve a step change in CP solver technology by developing a framework for parallel search that works at many scales, from desktop to large cluster, and with rich search techniques.

The aims of the project are as follows. (1) The design of parallel CP algorithms. (2) To understand the interaction between parallelism and rich CP search techniques. (3) To understand the tradeoffs between performance, scalability, and reproducibility in parallel CP search. (4) To explore abstractions for designing parallel CP algorithms.

The project will run as a collaboration between the FATA and GLASS sections, and with both experienced and youthful supervisors.

Contact: Blair Archibald, Ciaran McCreesh, Phil Trinder


Costed Computational Offload - Dr Jeremy Singer, Professor Phil Trinder

Computations now often execute in dynamic networks with a range of compute devices available to them. For example, a mobile robot may perform route planning or image analysis computations using its own resources, or might offload the computation to a nearby server to obtain a better result faster.

We have already developed cost models to minimise execution time by controlling the movement of mobile computations in networks. Such a computation periodically recalculates network and program parameters and will move if the estimated time to complete at the current location, T_h, exceeds the time to complete at the fastest available location, T_n, plus the communication time, T_c; that is, it moves when T_h > T_n + T_c.
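
A minimal sketch of this decision rule with made-up timings follows; the estimators that produce T_h, T_n and T_c on a dynamic network are the interesting research problem, and are not shown.

```python
# Minimal sketch of the mobility rule T_h > T_n + T_c from the text.
# Producing good estimates of these times is the hard part.

def should_move(t_here, t_best_remote, t_comm):
    return t_here > t_best_remote + t_comm

# e.g. 12s of work left locally vs. 4s remotely after a 3s transfer:
print(should_move(t_here=12.0, t_best_remote=4.0, t_comm=3.0))  # True: move
```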

The purpose of the PhD research is to develop and implement appropriate models to decide when and where to offload computation, and how much work to do. A key challenge is to monitor a dynamic network, e.g. can we predict how long a computational resource will be available from past network behaviour? Another challenge is to develop and implement appropriate models that scale the computation. For example how detailed should the offloaded planning activity be? If the computation takes too long we risk losing the connection.

The project will run within the vibrant Glasgow Parallelism research Group http://www.dcs.gla.ac.uk/research/gpg/ with both experienced and youthful supervisors, and is associated with the EPSRC AnyScale Apps project http://anyscale.org/

Contact: email, web


Compact Routing for the Internet Graph - Dr Colin Perkins

Internet routing algorithms do not scale. This project will build on a class of centralised algorithms, known as compact routing algorithms, that have appealing theoretical scaling properties, and develop them to form practical distributed algorithms and protocols that scale in real-world topologies, while also supporting traffic engineering and routing policies.

The currently deployed Internet routing protocol is BGPv4. This is a path vector protocol that, in the absence of policy constraints, finds shortest path routes, but that offers a wide range of tools to enforce routing policy and influence the chosen routes. Due to the underlying shortest path algorithm, however, state requirements for each node in the BGP routing graph have been proven to scale faster than linearly with the size of the network (i.e., with the number of prefixes). This has been manageable until now because the limited size of IPv4 address space has constrained the number of prefixes. However, with the uptake in deployment of IPv6, this can no longer be guaranteed, and we need to find long-term scalable routing protocols.

The so-called compact routing algorithms achieve sub-linear scaling with the size of the network by abandoning shortest path routing, and using landmark-based routing. While the theoretical worst case stretch for compact routing relative to shortest path routing is large, previous work in the School has demonstrated that the stretch achieved on the Internet graph is not significant, and has developed a distributed algorithm for landmark selection. This project will extend this work to develop a fully distributed version of the compact routing algorithm, and realise it as a prototype routing protocol that could, conceivably, be deployed on the Internet as a long-term scalable routing solution.
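
As a deliberately simplified toy (real compact routing schemes also keep local routing tables and encode landmark information in addresses), the sketch below routes via the landmark nearest the destination and reports the resulting stretch on a made-up five-node graph:

```python
# Toy landmark routing on a hypothetical graph: packets travel
# source -> (landmark nearest the destination) -> destination, and
# "stretch" is that path length divided by the true shortest path.
from collections import deque

GRAPH = {
    "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"],
    "d": ["b", "c", "e"], "e": ["d"],
}

def hops_from(src):
    """BFS hop counts from src to every node."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in GRAPH[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def stretch(src, dst, landmarks):
    d_src, d_dst = hops_from(src), hops_from(dst)
    lm = min(landmarks, key=lambda l: d_dst[l])   # landmark nearest dst
    via_landmark = d_src[lm] + hops_from(lm)[dst]
    return via_landmark / d_src[dst]

print(stretch("a", "e", landmarks={"d"}))  # 1.0: landmark sits on the shortest path
print(stretch("a", "b", landmarks={"e"}))  # 5.0: badly placed landmark forces a detour
```

The contrast between the two calls shows why landmark selection, the focus of the previous work mentioned above, dominates the stretch actually observed.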

Contact: email, web


Peer-to-peer and Real-time Traffic Over QUIC - Dr Colin Perkins

The QUIC protocol, originally developed by Google but currently being standardised in the IETF, is a next-generation Internet transport protocol. The primary use case is to replace TCP and TLS-1.3 as the transport for HTTP/2 traffic, increasing security, reducing latency, and solving some problems with mobility. It is ideal for web traffic and streaming video, but is not well suited to interactive real-time traffic (VoIP, video conferencing, VR and AR, gaming, etc.) or to peer-to-peer use.

This project will explore peer-to-peer use of QUIC for real time media, including NAT traversal, rendezvous, congestion control, FEC, and partial reliability. The objective is to develop an integrated solution that fits with the existing QUIC framework, while providing general-purpose mechanisms to help future applications. In the way that QUIC replaces TCP for web traffic, the goal here is to replace UDP and RTP for interactive real-time flows.

The work would involve close collaboration with the IETF and industry.

Contact: email, web


Post Sockets – What is the Transport Services API of the Future? - Dr Colin Perkins

The Berkeley Sockets API is showing its age. Over its 35-year history, it has become the ubiquitous portable networking interface, allowing applications to simply make effective use of TCP connections and UDP datagrams. Now, though, as a result of changes in the network and new application needs, the limitations of the Sockets API are becoming apparent. Post Sockets is a project to re-imagine the network transport API in the light of many years’ experience, changes in the network, better understanding of transport services, new application needs, and new programming languages and operating system services.

Details can be found at https://csperkins.org/research/post-sockets/ – the work is being done in parallel to standards work in IETF, API developments in industry (e.g., Apple is implementing the IETF work in iOS), and developing APIs in new programming languages such as Rust and Go.

Contact: email, web


Securing Future Networked Infrastructures through Dynamic Normal Behaviour Profiling - Professor Dimitrios Pezaros

Since its inception, the Internet has been inherently insecure. Over the years, much progress has been made in the areas of information encryption and authentication. However, infrastructure and resource protection against anomalous and attack behaviour are still major open challenges. This is exacerbated further by the advent of Cloud Computing where resources are collocated over virtualised data centre infrastructures, and the number and magnitude of security threats are amplified.

Current techniques for statistical and learning-based network-wide anomaly detection are offline and static, relying on the classical Machine Learning paradigm of collecting a corpus of training data with which to train the system. There is thus no ability to adapt to changing network and traffic characteristics without collecting a new corpus and re-training the system. Assumptions as to the characteristics of the data are crude: assuming measured features are independent through a Naïve Bayes classifier, or that projections that maximise the variance within the features (PCA) will naturally reveal anomalies. Moreover, there is currently no framework for profiling the evolving normal behaviour of networked infrastructures and identifying anomalies as deviations from such normality.
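
For concreteness, the static PCA baseline being critiqued looks roughly like the sketch below (synthetic data standing in for traffic features such as flow counts): it fixes a "normal subspace" once at training time, which is precisely why it cannot track evolving behaviour.

```python
# Sketch of the classic static PCA baseline: learn a "normal subspace"
# once from training data, then score points by their residual energy
# outside it. The data here is synthetic, not real traffic.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated "normal" features

mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
top = vt[:2]                                    # top-2 principal directions

def residual_score(x):
    """Distance from x to the learned normal subspace."""
    centred = x - mean
    return np.linalg.norm(centred - (centred @ top.T) @ top)

print(residual_score(normal[0]))                # typically small: fits the model
print(residual_score(mean + 10 * np.ones(5)))   # typically large: flagged as anomalous
```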

The overarching objective of this PhD project is to design in-network, learning-based anomaly detection mechanisms that will be able to operate on (and integrate) partial data, work in short timescales, and detect previously unseen anomalies. The work will bridge Machine and Reinforcement Learning with experimental systems research, and will evaluate the devised mechanisms over real-world virtualised networked environments and traffic workloads.

The student can focus on advancing the state-of-the-art in the learning processes, the requisite network programmability mechanisms, or both. For example, the project can focus on exploring recent advances in statistical ML to develop flexible probabilistic models that can capture the rapidly evolving view of the network. Or, it can focus on designing programmable dataplanes and application acceleration/offload frameworks that can support such advanced functionality running in-network and sustaining line-rate performance.

The research will be conducted as part of the Networked Systems Research Laboratory (netlab) at the School of Computing Science, and the student will be given access to real Internet traffic traces and a state-of-the-art networking testbed with fully programmable platforms at all software and hardware layers. The work spans some very vibrant, cross-disciplinary research areas, and the student will be equipped with in-demand skills in Machine Learning, cybersecurity and next-generation network architectures.

Contact: email, web


Performance Verification for Virtualized and Cloud Infrastructures - Professor Dimitrios Pezaros

  • How do you verify the performance of your distributed applications?
  • How do you configure your Cloud-based network-server farm to deliver maximum throughput?
  • How do you know you are getting the performance you have paid for from your provider?

The Internet has seen great success mainly due to its decentralised nature and its ability to accommodate myriad services over a simple, packet-switched communication paradigm. However, measurement, monitoring, and management of resources have never been a native part of the Internet architecture, which prioritised efficient data delivery over accountability of resource usage.

This has led to a global, complex network that is notoriously hard to debug, to measure in terms of temporal performance, and to verify as delivering consistent service quality levels. The lack of such capabilities has so far been swept under the carpet due to the distribution of resources across the global Internet, and the over-provisioning of network bandwidth, which has also been the main stream of revenue for network operators.

However, the Internet landscape has been changing drastically over the past few years: the penetration of Cloud computing imposes significant centralisation of compute-network-storage resources over data centre infrastructures that exhibit significant resource contention; and providers’ revenue increasingly depends on their ability to differentiate, and offer predictable and high-performing services over this new environment. The increased collocation of diverse services and users over centralised infrastructures, as well as the many layers of virtualisation (VM, network, application, etc.) required to support such multi-tenancy, make the development of always-on measurement and troubleshooting mechanisms a challenging research problem to tackle.

The overarching objective of this PhD project is to design native instrumentation and measurement support for performance verification over virtualised collocation infrastructures. This will enable data centre operators to monitor and troubleshoot their (physical and virtual) infrastructure on demand, and to provide “network measurement as a service” to tenants through exposing appropriate interfaces. Application providers (tenants) will in turn be able to define measurement metrics and instantiate the corresponding modules to verify their applications’ performance, and to validate that their service level agreements with the hosting infrastructure providers are being honoured.

The work will entail experimental research in the areas of Network Function Virtualisation (NFV) and Software-Defined Networking (SDN) with a view towards enabling programmable measurement at the various layers (and locations) of future virtualised infrastructures. For example, it will explore how network nodes can efficiently provide accounting and reporting functionality alongside their main forwarding operation; what support from the end-systems (and virtual machines) will be required in order to synthesise and deploy novel end-to-end performance verification services; and what the specification and execution interfaces of such a programmable environment should be.

The research will be conducted as part of the Networked Systems Research Laboratory (netlab) at the School of Computing Science, and the student will be given access to a state-of-the-art virtualisation infrastructure and relevant platforms. Through the strong experimental nature of this project, the student will contribute to a currently buzzing research area, and will be equipped with in-demand expertise in virtualised systems design, development, and evaluation. Digging under the surface, this work can have transformative effects on the design of future converged ICT environments that will need to deliver high-performance services, and where the boundaries between network, end-system, and application are becoming increasingly blurry.

Contact: email, web


ProgNets 2.0 - Professor Dimitrios Pezaros

Active and programmable networks were a popular research area about 15 years ago but eventually faded due to security and isolation concerns (how do I trust someone else’s code to run on my router’s interface?), and the lack of adoption by the industry that was at the time making money from offering high-bandwidth products and services.

All this has now changed: resource (server, network) virtualisation has been pervasive, allowing the efficient sharing of the physical infrastructure; and network operators and service providers now try to differentiate based on services they offer over virtualised infrastructures. In this new landscape, Software-Defined Networking (SDN) has emerged over the past five years as a new paradigm for dynamically-configured next generation networks, and has already been embraced by major equipment vendors (e.g., HP, Cisco, etc.) and service providers (e.g., Google).

Fundamental to SDN is the idea that the whole control plane is abstracted from individual network nodes and all network-wide functionality is configured centrally in software. Switches and routers are therefore reduced to general-purpose devices (in contrast to the legacy, vertically-integrated and vendor-controlled platforms) that perform fast packet switching and are configured on demand through a defined API (e.g., OpenFlow). All functionality that then controls the network (e.g., spanning tree computation, shortest-path routing, access control lists, etc.) is provided by a (set of) central controller(s), and the resulting rules are installed on the switches through the OpenFlow API. This separation between the network’s data and control planes is a first step in ‘softwarising’ future networks but is still a long way from enabling true programmability through softwarisation.
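
To make the match-action split concrete, here is a conceptual toy, not the actual OpenFlow API or message format: a controller computes rules centrally and installs them on switches, which only match and act, punting unknown traffic back to the controller.

```python
# Conceptual sketch of SDN match-action forwarding (all names invented).

class Switch:
    def __init__(self):
        self.flow_table = []                   # (match, action) pairs

    def install(self, match, action):
        """Called by the controller to push a rule down to the switch."""
        self.flow_table.append((match, action))

    def handle(self, pkt):
        for match, action in self.flow_table:
            if all(pkt.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"            # table miss -> ask the controller

s = Switch()
s.install({"dst": "10.0.0.2"}, "forward:port2")   # centrally computed route
s.install({"dst": "10.0.0.3"}, "drop")            # e.g. an access control entry
print(s.handle({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # forward:port2
print(s.handle({"src": "10.0.0.1", "dst": "10.0.0.9"}))  # send_to_controller
```

The table-miss fallback in the last line is exactly the centralisation of intelligence that the next paragraph identifies as the bottleneck for real-time services.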

The overarching objective of this PhD project is to design next generation, fully programmable Software-Defined Networks above and beyond the current state-of-the-art. Currently, the main SDN implementation through OpenFlow lacks any support for real-time programmable service deployment, since it centralises all intelligence (and programmability) around a (set of) controller(s). Future, service-oriented architectures will need to provide data path programmability by distributing intelligence to the network nodes. This is the only way to support the deployment of real-time programmable services in the data path (e.g., distributed network monitoring and control, performance-based provisioning, anomaly detection, dynamic firewalls, etc.).

The work will entail experimental research in protocols and languages for network programmability, in switch architectures, and in the software-hardware interface. It will explore platform-independent language representations and runtimes (e.g., bytecodes and intermediate representations) that can allow custom processing at the switches without requiring the manual extension of protocol fields to support new functionality, and at the same time offer bounded data forwarding performance. The work will also include the design of exemplar time-critical services that will benefit from such an underlying network architecture.

The research will be conducted as part of the Networked Systems Research Laboratory (netlab) at the School of Computing Science and the student will be given access to a state-of-the-art SDN testbed with fully programmable platforms at all software and hardware layers. Through the strong experimental nature of this project, the student will contribute to a currently buzzing research area, and will be equipped with highly demanded expertise in Software-Defined Networks, and next generation network architectures.

Contact: email, web


Compilers and runtime systems for heterogeneous architectures, in particular FPGAs - Professor Wim Vanderbauwhede

The topic of this research is the development of a compiler for high-level programming of heterogeneous architectures, and of FPGAs in particular. The compiler will target OpenCL (e.g. using the recent SPIR draft for integration with LLVM) so that the code can also run on multicore CPUs and GPUs. The challenge lies in transforming the intermediate code to obtain optimal performance on the target platform by using all devices in the heterogeneous platform. This requires partitioning the code and transforming each part to create an optimal version for the device it will run on. This is a complex problem requiring the development of sophisticated cost models for the heterogeneous architecture as well as run-time systems that can dynamically schedule code to run on different devices. If you are keen to undertake cutting-edge compiler research, this is the topic for you!

Contact: email, web


Acceleration of scientific code for clusters of multicore CPUs, GPUs and FPGAs - Professor Wim Vanderbauwhede

The topic of this research is the development and application of automated refactoring and source translation technologies to scientific codes, in particular climate/weather-related simulation code, with the aim of efficient acceleration of these codes on multicore CPUs, GPUs and FPGAs. If you are interested in source-to-source compilation (e.g. the ROSE compiler framework), refactoring and GPUs or FPGAs, and have expertise in compiler technology, FPGA programming or GPU programming, this topic provides an exciting research opportunity. The ultimate aim is to develop a compiler that can take single-threaded legacy code and transform it into high-performance parallel code using MPI, OpenMP, OpenACC or OpenCL, either entirely automatically or based on a small number of annotations.

Contact: email, web


Acceleration of Information Retrieval algorithms on GPUs and FPGAs ("Greener Search") - Professor Wim Vanderbauwhede

The topic of this research is accelerating search and data filtering algorithms using FPGAs and GPUs. FPGAs in particular have great potential for greening the data centres, as they offer very high performance-per-Watt. A lot depends on the actual algorithms, as well as on the system partitioning. If you have expertise in FPGA programming and would like to take part in the development of the next generation of low-power search technology, this is a great opportunity.

Contact: email, web


A novel shared-memory overlay for HPC cluster systems - Professor Wim Vanderbauwhede

Traditional programming models have assumed a shared-memory model; however, modern architectures often have many different memory spaces, e.g. across heterogeneous devices, or across nodes in a cluster. Maintaining the shared-memory abstraction for the programmer is both very useful and highly challenging. Of course, in general, naive shared-memory programming over an exascale HPC cluster would lead to disastrous performance.

However, in the context of a task-based programming model such as the GPRM (Glasgow Parallel Reduction Machine), we have shown that shared-memory parallel programming of manycore systems can be highly effective. The aim of this project is to extend the GPRM framework from homogeneous manycore systems to heterogeneous distributed systems through the addition of a shared-memory overlay. This overlay will allow the programmer to use a shared memory abstraction within the task-based programming model, and will leverage GPRM's sophisticated runtime systems and full-system knowledge to make intelligent caching decisions that will effectively hide the latency of the distributed memory space.

The implementation of this novel shared-memory overlay can be considered either in user space or as functionality in the operating system. The latter approach is more challenging but offers the greatest potential. If you are interested in research into system-level aspects of parallel programming for future HPC systems, this is an excellent choice.

Contact: email, web


Application-defined Networking for HPC systems - Professor Wim Vanderbauwhede

Workloads in the area of climate modelling and numerical weather prediction are becoming increasingly irregular and dynamic. Numerical Weather Prediction models such as the Weather Research and Forecasting model already display poor scalability as a result of their complex communication patterns. Climate models usually consist of four to five coupled models, and the communication between these models is highly irregular. Coupled models are a clear emerging trend, as individual models have become too complex for conventional integration. Combined with the growing scale of supercomputers and the ever-increasing computational needs (for example, for accurate cloud modelling in climate models), this trend poses a huge challenge in terms of performance scalability.

A lot of research and development effort has gone into optimizing the network hardware, the operating system network stack and the communication libraries, as well as optimization of the individual codes. Despite this, current supercomputer systems are not well equipped to deal efficiently with rapidly changing, irregular workloads, as the network is optimised statically, routing is static and there is no elastic bandwidth provisioning.

The aim of this PhD is to investigate a radically different approach to supercomputer networking, where the network is defined by the application in a highly dynamic and elastic way. This Application Defined Networking takes a holistic view of the complete system, from the application code down to the lowest level of the network layer. Decisions about routing, traffic prioritisation and bandwidth provisioning are taken using information provided at run time by the application code. In this way, the network will adapt so that traffic is always transferred in the optimal way (e.g. lowest possible latency or highest possible bandwidth).

As this is a very large topic, the actual PhD research project will likely focus on one or more specific aspects of the problem such as the machine learning algorithms required to predict the network behaviours, the inference of code annotations required for the application to notify the network of impending traffic, or the network subsystem required to handle the dynamic allocation of resources based on the application's needs.

Contact: email, web


Software Engineering Practice – Tim Storer

I’m interested in research projects covering the practice of all aspects of contemporary software engineering, with the perspectives, needs and concerns of software practitioners as a focus. We use a range of methods within the software engineering laboratory to study practice and practitioners, as well as to design, build and evaluate new ways of working. These include ethnographic studies of practice (such as diary studies and physical observations), surveys and focus groups, lab studies of behaviour with novel tools, online observational studies across open source repositories, and ‘in the wild’ evaluations.

Current areas of interest include:

  • Software engineering for sustainability
  • Scientific software engineering
  • Agile requirements engineering
  • Software effort estimation
  • Software team coordination
  • Software engineering education
  • Cross framework and language maintenance and debugging
  • Simulating large scale complex socio-technical systems

However, this is not an exclusive list – please contact me to discuss software engineering practice research topics that are of interest.

Contact: email, web.

__________________________________________________________________________________________________________

Future Programmable Networks and Services - Professor Dimitrios Pezaros

Active and programmable networks were a popular research area over 20 years ago but eventually faded due to the lack of adoption by the industry that was at the time focusing almost exclusively at increasing bandwidth capacity.

All this has now changed: resource virtualisation allows the efficient sharing of the physical infrastructure; and network operators and service providers now try to differentiate based on services they offer over virtualised infrastructures. In this new landscape, Software-Defined Networking (SDN) has emerged over the past decade as a new paradigm for dynamically-configured next generation networks, and has already been embraced by equipment vendors and service providers (e.g., Google, Facebook, etc.).

Fundamental to SDN has been the idea that a network’s entire control plane is logically centralised and abstracted from individual switches, which are stripped of their legacy functionality to become reduced-complexity devices configured through an established API (e.g., OpenFlow). This mode of operation demonstrated the great potential of SDN but also highlighted shortcomings in real-time packet processing: the (simplified) SDN switches are not capable of making stateful, per-packet decisions at line rate, and hence of implementing advanced services such as telemetry, intrusion and anomaly detection, and in-network compute acceleration. For this reason, over the past five years, a significant fraction of SDN research has been steered towards making the dataplane of individual switches itself programmable and independent of specific protocols like OpenFlow. This way, using domain-specific programming languages (e.g., P4), packet processing programs can be composed and executed on the switch, making the device itself programmable and able to support custom functionality at high speeds.

Research has thus been fragmented, focusing either on service orchestration through a network-wide control plane or on the programmable dataplane of individual switches with a very limited and sometimes device-local control plane, with relatively few studies tackling both.

The overarching objective of this PhD project is to look at these two areas in synergy, and devise network-wide control plane(s) able to orchestrate individually-programmable and potentially diverse dataplanes for the development of advanced and high-speed services over heterogeneous networked infrastructures. The work will entail experimental research in protocols and languages for network programmability, in switch architectures, and in the software-hardware interface. It will explore platform-independent language representations and runtimes (e.g., bytecodes and intermediate representations) that can allow custom pipelines on the switches without requiring the manual extension of protocol fields to support new functionality, and at the same time offer bounded data forwarding performance independent of bespoke hardware support. The work will also include the design of exemplar time-critical services that will benefit from such an underlying network architecture.

The research will be conducted as part of the Networked Systems Research Laboratory (netlab) at the School of Computing Science and the student will be given access to a state-of-the-art SDN testbed with fully programmable platforms at all software and hardware layers. Through the strong experimental nature of this project, the student will contribute to a currently buzzing research area, and will be equipped with highly demanded expertise in Software-Defined Networks, and next generation network architectures.

Contact: email, web

__________________________________________________________________________________________________________

Human and Robot Teaming via Shared Control System - Dr. Emma Li


Recently, artificial intelligence has significantly advanced robotic systems: humanoid robots can dance and vehicles can drive themselves. However, there are still many tasks that are very challenging for robots, particularly in complicated environments, for example fully autonomous driving in any conditions (level 5 autonomous driving). In this project, we intend to address this challenge by exploiting human intelligence as well as artificial intelligence (AI), since many tasks are quite easy for humans but very difficult for AI.


The goal of this project is to develop a human-AI shared control system (a human-in-the-loop design) that can fully unlock the potential of both human and robot. This is an interdisciplinary research project between computer science and engineering. The state-of-the-art Spot Robot from Boston Dynamics will be used in this project. The student is expected to work with our industry and academic partners via internships.


This project suits students who are interested in human-AI shared control and robotics.

Contact: email

__________________________________________________________________________________________________________

Understanding and Preventing Robot Impersonation Attacks - Dr. Emma Li


In the future, robots will be highly advanced, with great flexibility and accurate control performance. They will have the ability to mimic human behaviour or even perform better. This will cause a serious issue for current security and authentication mechanisms, e.g. behavioural biometric authentication (BBA), if robots are used to mimic human behaviour and attack the BBA for malicious purposes. In this project, we intend to understand the possibilities of robot attacks on current security and authentication mechanisms, and to design machine learning methods to prevent such attacks.


In this project, the student will have access to our state-of-the-art robot testbed, including Franka Emika and UR3e robotic arms. Computers and smart devices will be used in the project as well. The robotic arms will be programmed and controlled to mimic human behaviour and interact with edge devices such as keyboards, mice, and touch screens. The challenging part is identifying who is interacting with the devices: a robot or a human.


This project suits students who are interested in human-computer interaction, robotics, and cybersecurity.

Contact: email

__________________________________________________________________________________________________________

Towards a More Resource-Efficient and Sustainable Distributed Computing - Dr Lauritz Thamsen

It is difficult to run data-intensive applications on today’s diverse and dynamic computing infrastructure so that the applications provide the required performance and dependability, yet also run efficiently. Abundant evidence of low resource utilization, limited energy efficiency, and severe application failures backs up this claim. Meanwhile, computing’s environmental footprint already rivals aviation’s and is projected to increase even further over the coming years. The main objective of my work is therefore to support organizations and users in making efficient use of distributed computing infrastructures for their data-intensive applications.

Towards this goal, I work on new methods for a more data-driven and adaptive use of compute resources according to high-level user-defined objectives and constraints. Central to these methods are techniques that enable effective modelling of the performance, dependability, and efficiency of data-intensive applications and, therefore, optimization. In addition, my research also regularly investigates techniques for resource-efficient monitoring, profiling, and experimentation.

Example directions for PhD research I will be happy to supervise include:

- “Right-sizing” and cluster resource management for large-scale distributed data processing (e.g., for big data systems, scientific workflows, and machine learning applications)

- Performance modelling and scheduling for low-carbon cloud computing (shifting batch processing jobs to times when and places where low-carbon energy is going to be available)

- Carbon-aware edge computing (admission, placement, and offloading of computing tasks based on the availability of renewable energy at the edge of networks)

- Resource-efficient IoT-sensor data processing (across the emerging environments of the IoT, which span from devices to intermediate edge/fog resources and cloud infrastructures)

Contact: Dr Lauritz Thamsen (https://lauritzthamsen.org/)

__________________________________________________________________________________________________________

Programming Languages for Digital Twins - Dr B Archibald

While there are currently many tools for developing digital twins (DTs), there is limited research into custom domain-specific twin specification languages, which would allow strong static and dynamic analysis techniques for DTs while letting experts, e.g. in transport, clearly express their intent. Future twin systems will be federated, in the sense that multiple digital twins will interact to produce useful information, and it is essential that we can capture such systems within programming languages.

An interesting research topic is investigating novel domain-specific languages, including visual languages [1], for digital twins. This may include determining formal ways to link existing languages, for example composing a twin for the energy grid with a twin for transport at the language level, and developing domain-specific, safe-by-design [3] (e.g. via type systems) DT languages that allow stronger analysis than general-purpose tools [2]. New languages and techniques will be implemented and evaluated in real-world applications utilising the transport expertise of the TransiT team.

[1] Blair Archibald, Min-Zheng Shieh & Yu-Hsuan Hu et al. (2020) BigraphTalk: Verified Design of IoT Applications, IEEE Internet Things J.

[2] Blair Archibald, Muffy Calder & Michele Sevegnani et al. (2022) Modelling and verifying BDI agents with bigraphs, Sci. Comput. Programming.

[3] Blair Archibald & Michele Sevegnani (2024) A Bigraphs Paper of Sorts, Springer, International Conference on Graph Transformation.

Contact: email

 

Glasgow Interactive SysTems (GIST)

Topics in the SIRIUS Lab - Dr Mohamed Khamis

The SIRIUS Lab team at the University of Glasgow (https://mkhamis.github.io/mkhamis/team/) is looking for PhD students in the following areas. If you're interested in these topics, get in touch with mohamed.khamis@glasgow.ac.uk and share your CV, transcripts and details about which topics you're interested in and why. Below are the topics:

  • Gaze-based interaction on handheld mobile devices (including multimodal interaction).

  • Authentication in AR/VR.

  • Physical safety in virtual reality. Example papers:
      • The Dark Side of Perceptual Manipulations in Virtual Reality. http://www.mkhamis.com/data/papers/tseng2022chi.pdf
      • Safety, Power Imbalances, Ethics and Proxy Sex: Surveying In-The-Wild Interactions Between VR Users and Bystanders. http://www.mkhamis.com/data/papers/ohagan2021ismar.pdf
    We don't have a lot of work in this area yet, but there is big potential to explore what kinds of physical dangers users can be subject to in VR. Covering dangers that come from the virtual experience and those that come from the real world would be interesting: from theft and unwanted physical interactions with people in the world, to virtual manipulations of the user's perceptions that lead to physical danger in the real world.

  • Child safety in VR. One thing to consider is using physiological data to estimate whether a user is subject to harassment in VR.

  • Privacy in AR.

Contact: web.

____________________________________________________________________________

Exploring Animal-Computer Interaction - Dr Ilyena Hirskyj-Douglas

As technologies diversify and become embedded in everyday lives, the computing that animals are exposed to, and the range of new technologies developed for them, is increasing. While animals have been using human-oriented computer technology for some time, the field of Animal-Computer Interaction (ACI) has recently looked at the design and use of technologies with and for them.

Taking insight from Human-Computer Interaction (HCI), this PhD project in ACI will look at developing novel internet-enabled, animal-controlled technologies for animals. The aim of this project is to 1) develop novel devices that animals can ethically control themselves and 2) look at how computers can support animals’ interactions and behaviours. This will be undertaken through a user-centric, mixed-method approach to build computer-enabled devices that are tailored to the animals’ affordances. The project will explore how animals can participate in computer systems as users and how technologies can be shaped around the animal to draw insights from their usage.

This research will involve working with animals, in potentially various animal settings. The ideal candidate has a strong background in computing, especially in HCI with ideally some programming and interface design skills. No prior knowledge in animal science is required. 

Contact: web.

____________________________________________________________________________

Digital Technologies for Preventive Health in Everyday Lives - Dr Xianghua Ding


Preventive health, in terms of prediction and intervention to help prevent the onset or worsening of diseases, is more cost-effective than the traditional treatment-focused approach. In particular, preventive health can substantially reduce the risk of chronic diseases, and is considered the best solution to the growing crisis of chronic diseases. Traditionally, preventive health centres around people going to healthcare providers for regular health checkups and screening tests to see if any health issues are emerging and to catch them in time. While seemingly promising, affordability and accessibility remain primary barriers for vulnerable populations, causing increased disparities in healthcare.

Digital health technologies hold the potential to bring preventive health into everyday lives in a more accessible and affordable manner. In particular, the use of self-tracking, machine learning, and AI technologies can help assess, predict, and bring attention to an individual's health issues before the onset of a disease, and encourage lifestyle changes as early interventions for effective disease prevention. We call this approach preventive health in everyday lives, as distinct from traditional institution-based preventive health.

This research aims to answer the following questions for preventive health in everyday lives. How do one’s health, sub-health and illness statuses manifest themselves through the body and behaviour? How can digital technologies, including sensing, tracking and AI technologies, assist us in evaluating our health status before diseases occur or worsen? How can we enhance lay people’s health and technology literacy so they can effectively engage with these digital health technologies for preventive health? How should preventive health in everyday lives be integrated into broader healthcare systems?

These research questions will be answered through literature review, technological prototyping and empirical studies in the field or in the lab. The ideal candidate has a background in computing and is interested in combining theories and methods from social and medical sciences.

Contact: email

____________________________________________________________________________

Rethinking remote relationships in healthcare - Dr Stephen Lindsay

This work aims to rethink remote healthcare to fully realise its benefits while improving patient safety and trust. The COVID-19 pandemic showed that remote relationships in healthcare can play a vital role in the delivery of primary and secondary care. However, many members of the public dislike and distrust the disjoint services offered through them [1] and, although the move to remote services was seen as a welcome change by some patient groups [2] and the benefits of telehealth have been thoroughly explored in research, there were calls to heavily restrict or outright ban remote care’s use [3].

In this PhD, we will explore new ways to engage in participatory design work with communities that do not care for remote healthcare, looking to remote Scottish and Australian communities for inspiration. The PhD will lead to a thorough, human-centred understanding of the problems that remote healthcare causes, and will create novel prototypes showing what a better approach to video health consultations could look like.

Useful skills: experience working in healthcare settings or developing remote video calling applications is beneficial but not required.

Related Reading:

Greenhalgh, T., Wherton, J., Shaw, S. and Morrison, C., 2020. Video consultations for covid-19. BMJ, 368.

Kruse, C.S., Krowski, N., Rodriguez, B., Tran, L., Vela, J. and Brooks, M., 2017. Telehealth and patient satisfaction: a systematic review and narrative analysis. BMJ open, 7(8), p.e016242.

Monaghesh, E. and Hajizadeh, A., 2020. The role of telehealth during COVID-19 outbreak: a systematic review based on current evidence. BMC Public Health, 20(1), pp.1-9.

Footnotes

  1. Poverty of body language in Telehealth, The Guardian, https://tinyurl.com/a6e7y4kd
  2. Hundreds of groups ask governors to expand telehealth licensure flexibilities, Healthcare IT news, https://tinyurl.com/ytcsy4p4
  3. Mail on Sunday leads campaign to make GPs see all patients face to face once again, Daily Mail, https://tinyurl.com/vrhyhapp

Contact: email

 ____________________________________________________________________

Making with people living with cognitive impairments - Dr Stephen Lindsay

In this work, we will try to understand how to open the unique benefits of making your own bespoke technology to a community sorely in need of better-designed technology: people living with cognitive impairments. Makerspaces and the maker movement set out to democratise access to the facilities to make digital technology, letting anyone turn designer and create their own unique, bespoke technical systems. This movement has had some success, with social initiatives encouraging education in disadvantaged groups, and even in healthcare, where physical disability can be addressed with 3D-printed prosthetics [1] and other assistive technologies [2].

For people living with cognitive impairments, access to makerspace technology could have similarly enormous benefits, but the process of accessing the spaces, and the suitability of the technology in them, is not simple. This PhD will explore what maker technology looks like when it is designed to be used alongside people living with cognitive impairments, and how, when we allow excluded groups like these to build their own technology, the things they build differ from more conventional tech.

Useful skills: familiarity with classic makerspace technology (3D printing, CNC machining, simple electronics, lo-fi prototyping kits such as Arduino or Raspberry Pi) is beneficial but not required.

Related reading:

Ellis, K., Dao, E., Smith, O., Lindsay, S. and Olivier, P., 2021, May. TapeBlocks: A Making Toolkit for People Living with Intellectual Disabilities. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-12).

Hook, J., Verbaan, S., Durrant, A., Olivier, P. and Wright, P., 2014, June. A study of the challenges related to DIY assistive technology in the context of children with disabilities. In Proceedings of the 2014 conference on Designing interactive systems (pp. 597-606).

Slegers, K., Kouwenberg, K., Loučova, T. and Daniels, R., 2020, April. Makers in healthcare: The role of occupational therapists in the Design of Diy Assistive Technology. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-11).

Footnotes

  1. OpenBionics, Ultimaker, https://tinyurl.com/2p89ujdn
  2. Arduino Mod Lets Disabled Musician Play Guitar, Makezine, https://tinyurl.com/2p8akzje

Contact: email

____________________________________________________________________________

Vision-based AI for animal behaviour understanding – Dr Marwa Mahmoud

Continuous monitoring of farm animals is a time-consuming process, especially at times when 24/7 monitoring is required, such as the lambing season in sheep. Moreover, many diseases that affect animals are painful and cause distress. However, some animals, such as sheep, are prey species and tend not to openly express signs of pain or weakness. The lack of human ability to recognise signs of pain in time is one of the most common causes of untreated pain in these animals, which is often associated with disease.

The aim of this project is to explore how computer vision can be used for continuous monitoring of farm animals, especially sheep, extending previous work on sheep facial expression analysis. The goal is to develop computer vision and machine learning models to automatically analyse sheep behaviour (including facial expressions, gait, and animal-animal interaction) in order to build models for early prediction of significant events on the farm, such as early detection of diseases (e.g. foot-rot and mastitis) as well as events that require intervention, such as early detection of parturition and dystocia. The project will include working on video datasets collected over time and will explore how machine predictions compare with human predictions in terms of speed and accuracy. The project will also tackle computer vision challenges such as occlusions, image analysis in the wild, and animal breed variations. This work will involve collaborations with animal behaviour scientists.

The ideal candidate has a strong background in computer vision and machine learning.

Related Video: https://bit.ly/2YBqy34

Contact: email

____________________________________________________________________________

Vision-focussed digital biomarkers for mental health – Dr Marwa Mahmoud

There is growing interest from healthcare organisations, academia and industry in the automatic prediction, prevention, and intervention of mental health issues, but most of the current work depends on non-visual input, such as wearables and mobile phone data. Some work on multimodal behaviour understanding has explored combining signals from text, audio and vision, focusing on facial features and mainly tackling one specific illness.

The aim of this project is to develop novel behavioural modelling techniques that exploit full analysis of the visual signal from facial and body expressions, as well as to explore multimodal fusion techniques based on modelling uncertainty in the data.
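
One standard construction for uncertainty-aware fusion, shown purely as an illustration (the project would learn such uncertainties from data rather than assume them), is inverse-variance weighting of per-modality estimates:

```python
# Inverse-variance (precision-weighted) late fusion: modalities whose
# predictions are more uncertain contribute less. Values are made up.

def fuse(scores, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# face model fairly confident, body model noisy on this clip:
print(fuse(scores=[0.8, 0.3], variances=[0.1, 0.4]))  # 0.70, pulled toward the face estimate
```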

The project will aim to devise data-driven probabilistic inference models that account for common co-morbidities and correlations between different mental health disorders. We also aim to explore automatic detection of novel non-verbal cues, especially from body expressions, that have not been fully considered before in visual analysis of human behaviour, increasing the repertoire of automatically detected non-verbal cues related to mental health.

Related publications:

  1. Lin, I. Orton, Q. Li, G. Pavarini, M. Mahmoud, “Looking At The Body: Automatic Analysis of Body Gestures and Self-Adaptors in Psychological Distress”, IEEE Transactions on Affective Computing, 2021.
  2. Ceccarelli and M. Mahmoud, “Multimodal temporal machine learning for Bipolar Disorder and Depression Recognition”, Pattern Analysis and Applications, Special Issue on Computer Vision and Machine Learning for Healthcare Applications, June 2021.

Contact: email


Future of Immersive Technologies - Dr Julie Williamson

Immersive technologies are moving towards always-on ubiquitous devices that give users control over a dynamic reality.  This is a transformative shift in how we interact with technology, changing our very experience of reality, augmenting our capabilities, and opening up new ecosystems of human activity.  With the growing availability of consumer XR devices like Oculus Quest, NReal Light, and HoloLens 2, fundamental research in human-computer interaction is uniquely situated to transform these hardware advances into societal impacts.

 

PhD projects in immersive technologies can address any combination of the following challenge themes:

 

Dynamic XR: For XR to become usable in real world settings, we must develop new interaction metaphors and techniques to allow users to move dynamically along the reality-virtuality continuum.  The key challenges are breaking past existing interaction techniques to propose new input techniques, evaluating new input techniques in real world settings, and establishing interaction metaphors that work across the reality-virtuality continuum.

 

Mediated Perception: XR creates a fundamentally different relationship between people and how we experience reality, witness shared events, and anchor our experiences.  By joining research from HCI and philosophy, we can uncover new understandings of how XR acts on our perceptions and memories.  The key challenges are understanding how digital content is woven into memories, how we experience transitions between degrees of reality-virtuality, and how we make judgements about reality and anchor experiences.

 

Responsible Immersion: Immersive technologies present unprecedented challenges for ethics, equity, and society.  In a possible future where virtual content is indistinguishable from physical content, there are significant challenges around the integrity of human experience, consent when perceptions may be altered, and the further fragmentation of society through individualised realities.  The key challenges are identifying fundamental issues for responsible immersion, predicting possible dark patterns, and developing concrete solutions to promote responsible immersion at all levels of research and development.

Contact: email, web


In-car haptics - Professor Stephen Brewster

The physical controls in cars are being replaced by touchscreens. Pressing physical buttons and turning dials are being replaced by tapping on touchscreens and performing gestures. These work well on smartphones but can be problematic in cars. For example, they require more visual attention, taking the driver’s eyes off the road; they are smooth and flat, so it is hard to find where to interact.

The aim in this PhD project will be to use haptics (touch) to improve in-car interaction. We can then add back in some of the missing feedback and create new interactions that take advantage of the touchscreen but without the problems. We will investigate solutions such as pressure-based input – by detecting how hard a person is pressing, we can allow richer input on the steering wheel or other controls. Free-hand gesture input can allow the driver to control car systems without reaching for controls. For output, we will study solutions such as tactile feedback, ultrasound haptics and thermal displays to create a rich set of different ways of providing feedback. We will also combine these with audio and visual displays to create effective multimodal interactions that allow fast and accurate interaction from the driver but don’t distract them from driving. We will test these in the lab in a driving simulator and on the road to ensure our new interactions are usable.

Contact: email, web


Artificial Intelligence for Psychiatry and Mental Health - Professor Alessandro Vinciarelli

The goal of the project is to develop automatic approaches for the detection of psychiatric issues in children and adults. In particular, the project will use methodologies inspired by Social Signal Processing - the AI domain aimed at modelling, analysis and synthesis of nonverbal behaviour - to automatically detect the behavioural cues associated with the presence of common psychiatric issues such as depression and non-secure attachment.

The project will be highly interdisciplinary and will contribute to both computing science (making computers capable of analysing the behaviour of people involved in clinical interviews) and psychiatry (identifying the behavioural traces of psychiatric issues). Furthermore, the project will involve extensive experiments revolving around the interaction between humans and interactive systems aimed at delivering psychiatric tests. The ideal candidate has a solid background in computing, in particular machine learning and artificial intelligence, and is open to collaborating with colleagues active in the human sciences (social psychology, anthropology, etc.).

Contact: email, web

____________________________________________________________________________

Privacy-Preserving Emotion Recognition - Dr Tanaya Guha 

Off-the-shelf machine learning models are increasingly being used to detect a person’s facial and/or verbal expressions, which are often linked to their mental state or emotion. Such models are used to monitor mental health, to develop virtual conversational agents and even for games and entertainment. Such models, though intended for detecting expressions, may leak sensitive demographic information. The project objective is to build privacy-aware emotion recognition models (visual or speech or both) that preserve the emotional content while obfuscating any sensitive information (e.g., gender) as identified by the user.
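
One common formulation of this goal is adversarial representation learning: train an encoder so that emotion remains predictable from its output while a protected attribute does not. The PyTorch sketch below is a minimal, synthetic-data illustration of that idea, not the project's prescribed method; the linear models, the trade-off weight and the choice of attribute are all placeholders.

```python
# Minimal adversarial-privacy sketch (synthetic data, linear models).
# The encoder is trained to keep emotion predictable while making a
# protected attribute (here a stand-in "gender" label) hard to recover.
import torch
import torch.nn as nn

torch.manual_seed(0)
enc = nn.Linear(20, 8)      # shared representation
emo = nn.Linear(8, 4)       # emotion head (utility)
adv = nn.Linear(8, 2)       # adversary guessing the protected attribute
opt_main = torch.optim.Adam(list(enc.parameters()) + list(emo.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()

x = torch.randn(64, 20)                  # stand-in for expression features
y_emotion = torch.randint(0, 4, (64,))
y_attr = torch.randint(0, 2, (64,))

for step in range(200):
    z = enc(x)
    # 1) adversary learns to recover the attribute from the embedding
    opt_adv.zero_grad()
    ce(adv(z.detach()), y_attr).backward()
    opt_adv.step()
    # 2) encoder keeps emotion predictable while fooling the adversary
    opt_main.zero_grad()
    loss = ce(emo(z), y_emotion) - 0.5 * ce(adv(z), y_attr)
    loss.backward()
    opt_main.step()
```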

Desirable skills: Python, Machine Learning, Linear Algebra, Probability

Contact: email web

____________________________________________________________________________

Human-in-the-loop AI for end-users - Dr Simone Stumpf

 

It is increasingly important to design and develop human-in-the-loop AI systems which underpin responsible and trustworthy decision-making especially in high-stakes domains. To date, much of this work has been targeted at involving data scientists and machine learning experts. This project will focus instead on lay users and domain experts without a background in machine learning.

 

The PhD research will explore how end-user feedback can be integrated into AI systems to improve decision-making. As a necessary by-product, this research will investigate how to communicate the decision-making model and its predictions to the end-user through explainable AI (XAI), in order to improve the interpretability of the system and consequently engender appropriate trust. Possible domains include public, financial or healthcare decision-making. This research sits at the intersection of HCI and AI, and will involve developing machine learning systems that are rigorously evaluated through user studies.
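To give a flavour of the feedback loop involved, the toy sketch below lets a lay user tell a scikit-learn classifier to ignore or emphasise particular features before refitting. The function names and the column-rescaling scheme are hypothetical, one of many ways such feedback could be folded in.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def retrain_with_feedback(X, y, ignore=None, boost=None):
        """Refit after folding in end-user feedback ('ignore feature j',
        'feature k matters more'), applied by rescaling input columns."""
        w = np.ones(X.shape[1])
        for j in (ignore or []):
            w[j] = 0.0          # user: this feature should not drive decisions
        for k in (boost or []):
            w[k] = 2.0          # user: this feature is under-used
        return LogisticRegression(max_iter=1000).fit(X * w, y), w

    def explain(model, w, x):
        """Per-feature signed contribution for one case, shown back to the user."""
        return model.coef_[0] * (x * w)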

Contact: email web

____________________________________________________________________________

Next Generation Sensory Earables for Digital Phenotyping – Professor Fahim Kawsar

Earables are now pervasive, and their established purpose, ergonomics, and non-invasive interaction uncover exciting opportunities for personal sensing applications. Imagine a workplace where your earable understands you and tells you exactly how you feel and why. Imagine if the device could tell you when your brain is overtaxed, give you the power to regulate your emotions, or alter your soundscape for meaningful communication. What if it could save you from fatal occupational accidents, or stem information overload to better assist you in making the right business decisions?

 

In recent years, sensory research in and around the ear has risen sharply, as earables can transform how we perceive and experience sound. However, earables can also revolutionise personal health and clinical research by enabling non-invasive, continuous, and accurate health monitoring and behaviour sensing. Due to the ear's proximity to vital organs such as the brain and eyes, earables can be used to monitor a plethora of biomarkers. Although this proposition is remarkable, significant research challenges still exist concerning the accuracy, reliability, and validation of the data generated by this technology.

This PhD project aims to systematically study foundational aspects of designing next-generation sensory earables for digital phenotyping. Principally, we want to devise 1) noise-invariant and power-efficient multi-modal signal processing pipelines to extract various biomarkers, and 2) compute-efficient and privacy-preserving learning techniques to model higher-order health and behavioural outcomes. We expect the efficacy of these techniques to be validated in ecologically valid settings beyond a controlled lab environment. This project is suitable for students interested in sensory signal processing and applied machine learning on embedded devices.
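For a flavour of the signal-processing side, here is a deliberately naive Python sketch of one possible biomarker pipeline: band-pass filtering an in-ear PPG stream to the cardiac band and estimating heart rate. The PPG modality, sampling rate and band edges are illustrative assumptions, not the project's specification.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def heart_rate_from_in_ear_ppg(sig, fs=100.0):
        """Toy biomarker pipeline: band-pass an in-ear PPG stream to the
        cardiac band (0.7-3.5 Hz, roughly 42-210 bpm), then count beats."""
        b, a = butter(2, [0.7 / (fs / 2), 3.5 / (fs / 2)], btype="band")
        filt = filtfilt(b, a, sig)                       # zero-phase filtering
        peaks, _ = find_peaks(filt, distance=fs * 0.3)   # refractory period ~0.3 s
        duration_min = len(sig) / fs / 60.0
        return len(peaks) / duration_min                 # beats per minute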

Contact: email web

____________________________________________________________________________

Communication skills training and assessment with virtual social interactions and social signal processing - Dr Mathieu Chollet

Communication skills are essential in many situations and have been identified as core skills of the 21st century. Technological innovations have made it possible to create training applications with demonstrated potential: social situations can be simulated with virtual agents in order to enact a variety of training activities, and trainees' behaviours can be automatically tracked and analysed with social signal processing methodologies (AI methods applied to modelling and analysing social behaviour), in order to automatically assess the trainees' performance or confidence and subsequently generate personalised feedback.

The project will aim at developing new approaches for the assessment and training of communication skills, leveraging the benefits of virtual social interactions and social signal processing. In particular, the different components of communication skills training methods and applications (virtual social interactions, automated feedback, gamification) will be assessed with respect to their relative benefits on trainees' performance, self-efficacy, and user experience. The project will be highly interdisciplinary and will feature a strong experimental focus, involving the development of training prototypes and their evaluation in experimental studies. The ideal candidate will have a strong interest in interdisciplinary research, should be open to collaborating with colleagues from psychology and the learning sciences, and should demonstrate a solid background in computing, in particular artificial intelligence and/or human-computer interaction.

 Contact: email

 

Information, Data and Analytics (IDA): information, data, events, analytics at scale

Dr Debasis Ganguly

Topic: Large language models (LLMs), when scaled from millions to billions of parameters, have been demonstrated to exhibit the so-called 'emergence' effect, in that they are not only able to produce semantically correct and coherent text, but are also able to adapt surprisingly well to small changes in the contexts supplied as inputs (commonly called prompts).

Despite producing semantically coherent and potentially relevant text for a given context, LLMs are prone to yielding incorrect information. This misinformation generation, the so-called hallucination problem of an LLM, gets worse when an adversary manipulates the prompts to their own advantage, e.g., generating false propaganda to disrupt communal harmony, or generating false information to trap consumers with targeted products. Not only does human consumption of LLM-generated hallucinated content pose societal threats; such misinformation, when used in prompts, may also have detrimental effects on in-context learning (also known as few-shot prompt learning).

In light of the above-mentioned problems, this PhD will focus not only on identifying misinformation in LLM-generated content, but also on mitigating the propagation of this misinformation through downstream predictive tasks, leading to more robust and effective in-context learning. Another direction would be to investigate the optimisation of in-context learning with LLMs. It has become common practice to tune prompts in an ad-hoc manner for each task. Instead, prompt construction can be approached as a discrete state-space optimisation problem, which can be solved with reinforcement learning. The effect would be to learn task-specific in-context (few-shot) prompts that optimise an LLM's performance on a particular task in a systematic manner.
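As a simplified illustration of that discrete-optimisation view, the sketch below treats candidate few-shot prompt sets as arms of a multi-armed bandit and selects among them with an epsilon-greedy policy. Here `reward_fn` is a hypothetical hook that runs the LLM on a held-out batch and returns task accuracy; full RL over prompt construction would go well beyond this.

    import random

    def select_prompt(candidates, reward_fn, budget=200, eps=0.1):
        """Epsilon-greedy bandit over candidate few-shot prompt sets.
        reward_fn(prompt) is assumed to evaluate the LLM with that prompt
        on held-out data and return accuracy in [0, 1]."""
        counts = [0] * len(candidates)
        values = [0.0] * len(candidates)
        for _ in range(budget):
            if random.random() < eps:
                i = random.randrange(len(candidates))      # explore
            else:
                i = max(range(len(candidates)), key=lambda j: values[j])  # exploit
            r = reward_fn(candidates[i])
            counts[i] += 1
            values[i] += (r - values[i]) / counts[i]        # running mean reward
        return candidates[max(range(len(candidates)), key=lambda j: values[j])]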

Required skills: The ideal candidate will have a strong background in Computer Science and some background in Statistics. In particular, the student is expected to have strong programming skills, some prior experience of machine learning, a good command of English, and teamwork skills. Prior experience of research in machine learning (specifically deep learning with neural models) and publishing papers will be preferable.

Eligibility: Full funding is provided for EU/UK students (standard EU/UK fees and stipend rates included). Non-EU/UK students can apply, however they have higher international fees, which will not be fully covered by the scholarship.

Contact: email


RESILIENT LEARNING IN EDGE COMPUTING - Dr Christos Anagnostopoulos

Research fields: Resilience in Distributed ML systems; Distributed Decision Making; Distributed AI. 

Description: In distributed computing environments, the collaboration of nodes for predictive analytics at the network edge plays a crucial role in supporting real-time services. When a node's service becomes unavailable for various reasons (e.g., service updates, node maintenance, or even node failure), the remaining nodes cannot efficiently replace its service, because they hold different data and predictive models (e.g., Machine Learning (ML) models). This research focuses on building and maintaining systems' resilience to node service unavailability/failure, avoiding interruptions to their predictive services.

Enrolment & opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science under the supervision of Dr Christos Anagnostopoulos and will join the Knowledge & Data Engineering Systems (KDES) Group. Our lab explores several different issues, such as distributed ML, statistical learning, scalable & adaptive information processing, and data processing algorithms.

Skills: The ideal candidate will have a background in computer science and a background in mathematics and/or statistics. Special areas of interest include mathematical modelling/optimisation. A good understanding of basic machine learning and adaptation algorithms, as well as an MSc in one of the above areas, will be a considerable plus. Programming skills (Python/MATLAB/Java), a good command of English and teamwork capacity are required.

Contact: email web


MODEL REUSABILITY AT THE EDGE - Dr Christos Anagnostopoulos

Research fields: Model re-usability at the Edge; multi-task & Federated Learning at the edge. 

Description: To cope with the challenge of managing numerous computing devices, humongous data volumes and models in Internet-of-Things environments, Edge Computing (EC) has emerged to serve latency-sensitive and compute-intensive applications. Although the EC paradigm significantly reduces latency for predictive analytics tasks by deploying computation in edge nodes' vicinity, the large scale of EC infrastructure still places huge, inescapable burdens on the required resources. This research focuses on novel paradigms in which edge nodes effectively reuse locally completed computations (e.g., trained models) at the network edge.
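Parameter averaging in the style of federated learning is one concrete way edge nodes can reuse each other's completed training rather than starting from scratch. The sketch below averages PyTorch state dicts; the weighting scheme (e.g., by local dataset size) is illustrative, and real systems would also need to handle non-float buffers and heterogeneous architectures.

    import copy
    import torch

    def federated_average(state_dicts, weights=None):
        """Reuse computation done at edge nodes by averaging their locally
        trained model parameters (FedAvg-style) instead of retraining."""
        n = len(state_dicts)
        weights = weights or [1.0 / n] * n
        avg = copy.deepcopy(state_dicts[0])
        for key in avg:
            # weighted sum of each parameter tensor across all nodes
            avg[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
        return avg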

Enrolment & opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science under the supervision of Dr Christos Anagnostopoulos and will join the Knowledge & Data Engineering Systems (KDES) Group. Our lab explores several different issues, such as distributed ML, statistical learning, scalable & adaptive information processing, and data processing algorithms.

Skills: The ideal candidate will have a background in computer science and a background in mathematics and/or statistics. Special areas of interest include mathematical modelling/optimisation. A good understanding of basic machine learning and adaptation algorithms, as well as an MSc in one of the above areas, will be a considerable plus. Programming skills (Python/MATLAB/Java), a good command of English and teamwork capacity are required.

Contact: email web


DISTRIBUTED AI AT THE EDGE - Dr Christos Anagnostopoulos

Research Fields: Distributed inference; Distributed DL/ML; Personalized Federated Learning. 

Description: The main aim of this PhD research is the intelligent management of distributed data. The focus will be on the management of heterogeneous streams of dynamically changing data and the provision of distributed AI techniques that build knowledge over multiple streams. The study involves knowledge discovery fully adapted to the application domain and the underlying infrastructure. Novel techniques for distribution reasoning and adaptation, model inconsistency checking, distributed inference, distributed time-series correlation and decentralised concept drift identification will be proposed and evaluated. The implementation will adopt widely known frameworks for supporting edge computing environments.
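As one small, self-contained illustration of decentralised concept-drift identification, each node might compare the recent mean of a monitored statistic (say, prediction error) against a longer reference window and flag drift when they diverge. The window lengths and threshold below are illustrative; the project itself targets much richer techniques.

    from collections import deque
    import statistics

    class DriftDetector:
        """Per-node drift check (sketch): flag drift when the recent mean of a
        monitored statistic moves too far from the long-run reference mean."""
        def __init__(self, ref_len=500, recent_len=50, threshold=3.0):
            self.ref = deque(maxlen=ref_len)
            self.recent = deque(maxlen=recent_len)
            self.threshold = threshold

        def update(self, x):
            self.ref.append(x)
            self.recent.append(x)
            if len(self.ref) < self.ref.maxlen:
                return False                      # still warming up
            mu = statistics.mean(self.ref)
            sd = statistics.stdev(self.ref)
            z = abs(statistics.mean(self.recent) - mu) / (sd + 1e-9)
            return z > self.threshold             # True => trigger adaptation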

Enrolment & opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science under the supervision of Dr Christos Anagnostopoulos and will join the Knowledge & Data Engineering Systems (KDES) Group. Our lab explores several different issues, such as distributed ML, statistical learning, scalable & adaptive information processing, and data processing algorithms.

Skills: The ideal candidate will have a background in computer science and a background in mathematics and/or statistics. Special areas of interest include mathematical modelling/optimisation. A good understanding of basic machine learning and adaptation algorithms, as well as an MSc in one of the above areas, will be a considerable plus. Programming skills (Python/MATLAB/Java), a good command of English and teamwork capacity are required.

Contact: email web

Information, Data and Analytics (IDA): inference, dynamics and interaction

Casual Interaction: creating novel styles of human-computer interaction that span a range of engagement levels - Professor Roderick Murray-Smith

The focused–casual continuum is a framework for describing interaction techniques according to the degree to which they allow users to adapt how much attention and effort they invest in an interaction, conditioned on their current situation. Casual interactions are particularly appropriate in scenarios where full engagement with devices is socially frowned upon, unsafe, physically challenging or too mentally taxing.

This thesis will involve the design of novel interaction approaches which use a range of new sensing and feedback mechanisms to go beyond direct touch and enable wider use of casual interactions. This will include 'around-device' interactions and 'Internet of Things'-style interaction with specific objects. Making these systems work will require the development of systems based on signal processing and machine learning.

To test the performance of the system we will use motion capture systems and biomechanical models of the human body, wearable eye trackers and models of attention to infer the mental and physical effort required to interact with the system.

Related work:

  • H. Pohl, R. Murray-Smith, Focused and Casual Interactions: Allowing Users to Vary Their Level of Engagement, ACM SIGCHI 2013 (pdf)
  • The BeoMoment from B&O is an example of Casual interaction which was designed together with our group in a recent Ph.D. thesis - Boland, Daniel (2015) Engaging with music retrieval.

Contact: email web

____________________________________________________________________________

 

Compositionality and relationality in deep learning - Dr Paul Henderson

This project will develop new techniques for incorporating symbolic ideas – such as object attributes, bindings and relations – into deep learning models. While transformers and diffusion models have achieved impressive performance on generative learning tasks due to their powerful inductive biases, they still fail to systematically generalise to complex combinations of concepts, and to bind adjectives and nouns reliably in complex situations. We hypothesise this is because they lack sufficient expressivity to internally represent a complete scene or state-of-the-world – composed of a set of objects, their attributes, and relations among them – and to reason over this. We hypothesise that this lack of expressivity also contributes to the need for extremely large amounts of training data in models that do achieve (limited degrees of) compositional generalisation. In this project, we shall therefore investigate adaptations of transformers and other deep learning models, to allow them to represent complex symbolic structures more effectively, while still being driven by end-to-end unsupervised training objectives such as maximum likelihood.

Contact: email

____________________________________________________________________________

 

 

SEMI-SUPERVISED MEDICAL IMAGE SEGMENTATION AND REGISTRATION - Dr. Fani Deligianni 

Research fields: Semi-supervised learning; Medical image analysis; Deep learning

Description: Medical image segmentation and registration are crucial tasks in healthcare diagnostics and treatment planning. However, these tasks often require large amounts of labelled data, which can be expensive and time-consuming to obtain. To address this challenge, this research focuses on developing semi-supervised learning techniques for medical image segmentation and registration that can effectively leverage both labelled and unlabelled data. By combining the strengths of supervised and unsupervised learning, we aim to reduce the dependence on large, annotated datasets while maintaining high accuracy.
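A minimal sketch of one such semi-supervised recipe, consistency regularisation in PyTorch, is shown below: labelled scans contribute a supervised loss, while unlabelled scans contribute a penalty for the model disagreeing with itself under small input perturbations. The noise scale, loss weighting and plain cross-entropy are illustrative simplifications.

    import torch
    import torch.nn.functional as F

    def semi_supervised_step(model, x_lab, y_lab, x_unlab, lam=0.5):
        """One training step mixing supervised and consistency losses."""
        sup = F.cross_entropy(model(x_lab), y_lab)        # labelled scans

        noise = 0.1 * torch.randn_like(x_unlab)
        with torch.no_grad():
            target = torch.softmax(model(x_unlab), dim=1)  # "teacher" view
        student = torch.log_softmax(model(x_unlab + noise), dim=1)
        cons = F.kl_div(student, target, reduction="batchmean")

        return sup + lam * cons   # backpropagate this combined loss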

Enrolment & opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science under the supervision of Dr. Fani Deligianni and will join the Biomedical AI and Imaging Informatics Group. Our lab explores several research problems in semi-supervised learning, medical image analysis and deep learning. The candidate will have the opportunity to work on cutting-edge research at the intersection of machine learning and medical imaging with potential real-world applications in healthcare.

Skills: The ideal candidate will have a strong background in computer science, with a focus on machine learning and image processing. A solid foundation in mathematics and/or statistics is important. Special areas of interest include deep learning architectures for image analysis, semi-supervised learning techniques, and optimization methods. A good understanding of medical imaging modalities (e.g., MRI, CT, ultrasound) will be a considerable plus. Strong programming skills (Python, PyTorch/TensorFlow) are required. Experience with image processing libraries (e.g., OpenCV, SimpleITK) is desirable. Good communication skills in English and teamwork capacity are essential for collaborating with interdisciplinary teams and disseminating research findings.

Contact: email, website

 ____________________________________________________________________________

 

PRIVACY-PRESERVING HUMAN MOTION ANALYSIS USING WIFI DATA - Dr. Fani Deligianni

Research fields: Privacy-preserving machine learning; Human motion analysis; WiFi sensing; Deep learning

Description: Human motion analysis via WiFi sensing offers a non-intrusive method for monitoring activities in various settings, from healthcare to smart homes. However, the collection and analysis of such data raise significant privacy concerns, as WiFi signals can potentially reveal sensitive information about individuals' behaviours and routines. This research focuses on developing privacy-preserving techniques for human motion analysis using WiFi data, ensuring that valuable insights can be extracted without compromising individual privacy.

The project will explore advanced privacy-preserving machine learning techniques, such as differential privacy and knowledge distillation, adapted specifically for WiFi-based motion analysis. By developing methods that can learn from distributed data sources without centralising raw data, we aim to enable robust motion analysis while keeping personal data stored locally. Additionally, we will investigate techniques to minimise the risk of inversion attacks and ensure that the learned models do not inadvertently leak sensitive information.
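For a flavour of the tooling, the sketch below trains a toy WiFi-CSI activity classifier under differential privacy with Opacus (the make_private API as in Opacus 1.x, to the best of our knowledge). The model, data shapes and privacy parameters are all illustrative stand-ins.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    # Toy stand-in for a WiFi-CSI activity classifier: 90 subcarriers x 128 samples.
    model = nn.Sequential(nn.Flatten(), nn.Linear(90 * 128, 64), nn.ReLU(), nn.Linear(64, 6))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
    data = TensorDataset(torch.randn(256, 90, 128), torch.randint(0, 6, (256,)))
    loader = DataLoader(data, batch_size=32)

    privacy_engine = PrivacyEngine()
    model, optimizer, loader = privacy_engine.make_private(
        module=model, optimizer=optimizer, data_loader=loader,
        noise_multiplier=1.1,   # more noise => stronger privacy, lower utility
        max_grad_norm=1.0,      # per-sample gradient clipping bound
    )

    for x, y in loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()         # per-sample gradients are clipped and noised,
        optimizer.step()        # bounding what the model memorises about any one person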

Enrolment & opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science under the supervision of Dr. Fani Deligianni and will join the Biomedical AI and Imaging Informatics Group. Our lab explores cutting-edge research problems at the intersection of privacy-preserving machine learning and human motion analysis. The candidate will have the opportunity to work on innovative solutions that balance the utility of motion analysis with stringent privacy requirements, potentially impacting fields such as healthcare monitoring, smart home technologies, and privacy-aware ambient intelligence.

Skills: The ideal candidate will have a strong background in computer science or electrical engineering, with a focus on machine learning and/or signal processing. A solid foundation in mathematics and/or information theory is important. Special areas of interest include privacy-preserving machine learning techniques (e.g. differential privacy) and deep learning architectures for time-series data, and WiFi signal processing. Familiarity with human motion analysis or activity recognition will be a considerable plus.

Strong programming skills (Python, PyTorch/TensorFlow) are required. Experience with privacy-preserving libraries (e.g., Opacus) and signal processing tools is highly desirable. Excellent communication skills in English and the ability to work in interdisciplinary teams are essential for collaborating on complex privacy-utility trade-offs and disseminating research findings to both technical and non-technical audiences.

Contact: email, website

____________________________________________________________________________

 

AI-DRIVEN CLINICAL DECISION SUPPORT SYSTEMS USING ECG AND MULTI-MODAL DATA - Dr. Fani Deligianni

Research fields: Clinical decision support systems; Multi-modal machine learning; Electrocardiogram (ECG) analysis; Explainable AI, Deep learning

Description: Clinical decision support systems (CDSS) have the potential to significantly improve patient outcomes by assisting healthcare professionals in making timely and accurate diagnoses. This research focuses on developing advanced AI-driven CDSS that leverage electrocardiogram (ECG) data alongside other multi-modal clinical information to enhance diagnostic accuracy and disease prevention.

The project aims to create novel deep learning architectures capable of integrating and analyzing diverse data types, including ECG signals, patient demographics, laboratory results, and medical imaging. By fusing these multi-modal inputs, we seek to capture a more comprehensive view of a patient's condition, leading to more accurate and personalized diagnoses. A key challenge lies in effectively combining these heterogeneous data sources while maintaining interpretability of the AI's decision-making process.
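To make the fusion idea concrete, here is a minimal late-fusion sketch in PyTorch: a 1D-CNN encodes the ECG trace, an MLP encodes tabular inputs (demographics, laboratory results), and the concatenated embeddings drive a diagnostic head. The 12-lead input, layer sizes and class count are illustrative assumptions, not a prescribed architecture.

    import torch
    import torch.nn as nn

    class LateFusionCDSS(nn.Module):
        """Sketch of multi-modal late fusion for a clinical decision task."""
        def __init__(self, n_tabular=16, n_classes=5):
            super().__init__()
            self.ecg = nn.Sequential(
                nn.Conv1d(12, 32, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),      # -> (B, 64)
            )
            self.tab = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
            self.head = nn.Linear(64 + 32, n_classes)

        def forward(self, ecg, tabular):   # ecg: (B, 12 leads, T samples)
            fused = torch.cat([self.ecg(ecg), self.tab(tabular)], dim=1)
            return self.head(fused)        # diagnostic logits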

Furthermore, this research will explore the development of explainable AI techniques specifically tailored for clinical applications. This focus on interpretability is crucial for building trust among healthcare professionals and ensuring that the CDSS can provide not just predictions, but also clear, actionable insights into the underlying factors driving those predictions.

Enrolment & opportunity: The successful candidate will enrol as a PhD student at the School of Computing Science under the supervision of Dr. Fani Deligianni and will join the Biomedical AI and Imaging Informatics Group. Our lab investigates cutting-edge problems in AI for healthcare applications, including ethical considerations and explainable AI. The candidate will have the opportunity to work with multidisciplinary teams of computing scientists and clinicians, and to develop innovative solutions that have the potential to directly impact patient care and clinical practice.

Skills: The ideal candidate will have a strong background in computer science or biomedical engineering, with a focus on machine learning and signal processing. A solid foundation in mathematics and statistics is essential. Special areas of interest include deep learning architectures for time-series and multi-modal data and explainable AI techniques. Familiarity with ECG interpretation and general medical knowledge will be a considerable plus.

Strong programming skills (Python, PyTorch/TensorFlow) are required. Experience with biomedical signal processing libraries is highly desirable. Knowledge of clinical workflows and regulatory requirements for medical AI systems would be beneficial. Excellent communication skills in English and the ability to work in interdisciplinary teams are essential for collaborating with healthcare professionals and translating complex technical concepts for clinical audiences.

Contact: email, website

 

Information, Data and Analytics (IDA): information retrieval

Extracting biomedical knowledge at scale - Dr Jake Lever

Description: Most biomedical knowledge is locked in the text of research articles and textbooks, and is not easily accessible to researchers or to machine learning systems that could make use of it. Natural language processing (NLP) methods can be used to extract knowledge from these sources and convert it into structured data that can be easily managed and searched. However, the scale of the literature and the constant flood of new papers present computational challenges at each stage of this complex machine learning pipeline. The broad aims of this PhD are to examine how newer, computationally expensive deep learning models can be applied cleverly at scale, and to build approaches that intelligently combine the extracted knowledge. This will involve novel Named Entity Recognition (NER) methods to accurately extract mentions of biomedical entities, and n-ary relation extraction methods to identify the complex relationships described, such as treatments and side effects.
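For orientation, a minimal version of the extraction step might stream abstracts through a transformer NER tagger, as sketched below with the Hugging Face transformers pipeline. The checkpoint name is a placeholder for any biomedical NER model; at corpus scale the real challenges become throughput, caching and combining the extracted mentions.

    from transformers import pipeline

    # Model name is a placeholder; a biomedical NER checkpoint would slot in here.
    ner = pipeline("token-classification", model="MY-BIOMEDICAL-NER-MODEL",
                   aggregation_strategy="simple")

    def extract_entities(abstracts):
        """Tag a batch of abstracts and yield (mention, type, score) triples."""
        for ents in ner(abstracts, batch_size=64):
            yield [(e["word"], e["entity_group"], e["score"]) for e in ents]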

Skills: The ideal candidate would have strong skills in machine learning with preferably some knowledge of natural language processing, deep learning or information extraction. A background in biology or medicine is not required at all, but an interest in that area would certainly help.

Eligibility: Full funding is available for UK (home) students. Funding for international students is available, though more competitive.

Contact Information: For further information, interested candidates can contact Jake Lever 

_________________________________________________________________________________

Renting the Right Room: Improving Airbnb Recommendation with Deep learning - Dr Richard McCreadie and Professor Iadh Ounis

Description: The Glasgow Information Retrieval group is looking for motivated students interested in our doctoral programme. In particular, in collaboration with the Adam Smith Business School, we are looking for a PhD student to work on retrieval challenges in the emerging 'sharing economy' (e.g. online room renting/sharing websites). A successful student taking this opportunity will be provided with access to Big Data from Airbnb, including sales information and dates, property descriptions and images, as well as property reviews.

The broad aim of this PhD programme is to examine how to use and extend state-of-the-art machine learning and artificial intelligence algorithms (e.g. new neural network architectures) to better satisfy tenants on platforms like Airbnb. This will involve learning about how such platforms recommend properties to people, analysing the influential factors that lead to a good customer experience, and ultimately designing new approaches that produce better recommendations than current solutions.
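As a baseline illustration (not the project's target architecture), a simple neural recommender over user/property interactions might look like the PyTorch sketch below: embedding dot products predict booking affinity, with all sizes and labels synthetic. Richer models would add review text, images and pricing signals on top.

    import torch
    import torch.nn as nn

    class MFRecommender(nn.Module):
        """Matrix-factorisation baseline: user/property embeddings whose
        dot product scores booking affinity."""
        def __init__(self, n_users, n_properties, dim=32):
            super().__init__()
            self.users = nn.Embedding(n_users, dim)
            self.props = nn.Embedding(n_properties, dim)

        def forward(self, u, p):
            return (self.users(u) * self.props(p)).sum(dim=1)  # affinity logit

    model = MFRecommender(10_000, 5_000)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    u = torch.randint(0, 10_000, (64,))
    p = torch.randint(0, 5_000, (64,))
    stayed = torch.rand(64)   # toy implicit-feedback labels
    loss = nn.functional.binary_cross_entropy_with_logits(model(u, p), stayed)
    loss.backward()
    opt.step()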

Environment: The successful candidate will enrol as a PhD student at the School of Computing Science (Information, Data, Analysis Section), University of Glasgow, under the supervision of Prof Iadh Ounis & Dr Richard McCreadie, and will be co-supervised by Dr Bowei Chen from the Adam Smith Business School. The successful candidate will be based in the Glasgow Information Retrieval Group, and will be expected to collaborate with experts in Big Data processing, Machine Learning from across the IDA Section. The successful candidate will have access to a state-of-the-art cluster of machines, including new GPU servers.

Skills: The ideal candidate will have a strong background in Computer Science and some background in either Statistics. In particular, the student is expected to have strong programming skills, some prior experience of machine learning, a good command of English and team work skills.

Eligibility: Full funding is provided for EU/UK students (standard EU/UK fees and stipend rates included). Non-EU/UK students can apply, however they have higher international fees, which will not be fully covered by the scholarship.

Contact Information: For further information, interested candidates can contact Richard McCreadie (richard.mccreadie@glasgow.ac.uk) or Iadh Ounis (iadh.ounis@glasgow.ac.uk)

________________________________________________________________________________

Enhancing Sustainability via Dynamic Optimisation of Barclays' Data Services - Dr Richard McCreadie and Dr Jeremy Singer

Barclays currently operates a complex network of data services, comprising a range of large heterogeneous databases, key-value stores, object stores and more. Indeed, one example data store holds over 6TB of data across 75 collections, with ingestion and consumption 24/7. Data is also queried around 10k times a day using over 25 APIs, which require a response within 600ms to meet service level agreements. However, this data is stored in a way that does not lend itself to analytics use cases, resulting in frequent data duplication, as there is then a need to create additional data stores catered to each analytics and data visualisation task. As a result, there are a range of opportunities to optimise Barclays' data infrastructure to enable more efficient analytics and operational use cases, thereby reducing Barclays' data footprint and hence the energy required to store and process that data. With an ever-increasing amount of data and use cases resulting from machine learning, this is paramount to ensuring technology stack sustainability.

One key advantage that Barclays enjoys is availability of deep logs regarding the usage of this data infrastructure. Further, Barclays keeps integration patterns to understand the use cases of the data, allowing for the identification and modelling of both analytics and operational use cases. Combined, these unique data points have the potential to be used to model optimised data storage solutions, enabling 1) reduced data duplication; and 2) next generation data structures that are suitable for both analytics and operational needs, which to our knowledge do not exist currently.

The core topic of the PhD is investigating ways to smartly use this existing log data regarding data usage within Barclays to optimise their data infrastructure. It is anticipated that the PhD will in broad strokes involve:

  • Analysis of Barclays log data with the goal of identifying key inefficiencies currently present.
  • Developing machine learned models trained on this log data with the goal of predicting future data access loads.
  • Using these learned models to trigger on-the-fly optimisations of the infrastructure (e.g. scaling database replication up or down) such that data needs are met with as few resources as possible; a minimal version of this predict-then-scale loop is sketched below.
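The sketch below is deliberately minimal: it fits a lag model on an hourly query-rate log, forecasts the next hour, and converts the forecast into a replica count. The lag model, the per-replica capacity figure and the scaling rule are all illustrative assumptions, not Barclays' actual infrastructure.

    import numpy as np
    from sklearn.linear_model import Ridge

    def forecast_and_scale(hourly_queries, max_replicas=8):
        """Toy predict-then-scale loop over a query-rate log (a list of
        hourly counts, at least ~2 days long)."""
        lags = 24
        X = np.array([hourly_queries[i:i + lags]
                      for i in range(len(hourly_queries) - lags)])
        y = np.array(hourly_queries[lags:])
        model = Ridge().fit(X, y)                       # autoregressive lag model
        next_hour = model.predict([hourly_queries[-lags:]])[0]
        needed = int(np.ceil(next_hour / 1_500))        # illustrative capacity/replica
        return max(1, min(max_replicas, needed))        # clamp to allowed range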

 

This PhD would be suitable for a student interested in machine learning, financial data, data storage solutions, and infrastructure orchestration (using containers).

How to Apply:  Please refer to the following website for details on how to apply:

http://www.gla.ac.uk/research/opportunities/howtoapplyforaresearchdegree/.

Information, Data and Analytics (IDA): computer vision and autonomous systems

Bayesian Deep Atlases for Cardiac Motion Abnormality Assessment by Integrating Imaging and Metadata - Dr Ali Gooya

Cardiovascular diseases (CVDs) are the second biggest killer in the UK; currently, more than 7 million people live with CVD in the country. Early identification of individuals at significant risk is critical to improve patients' quality of life and reduce the financial burden on the social and healthcare systems. Many CVDs lead to a shortage of blood supply to the heart muscle, and the resulting abnormal motion is diagnosed non-invasively by analysing the patient's dynamic cardiac imaging data. Manual assessment of these images is subjective, non-reproducible, limited to the left ventricle, and time-consuming. Statistical atlases describing the 'average' pattern of heart motion over a sizeable healthy population can help identify deviations from normality in individuals. However, the integration of existing atlases into clinical practice is inhibited by fundamental limitations: (i) the derived motion statistics are often independent of the patient's age, gender, weight, etc. (metadata), which are essential for precise diagnosis; and (ii) abnormalities detected due to failure of heart segmentation cannot be disentangled from the underlying clinical conditions.

To alleviate these fundamental limitations, this proposal aims, for the first time, to develop a complete probabilistic atlas to evaluate bi-ventricular motion abnormalities accurately.

  • Holistically integrating imaging and metadata from a large population cardiac imaging study
  • Disentangling algorithmic segmentation failures from underlying clinical conditions
  • Addressing the computational challenge of extending deep transformer models to motion data

The framework will be a novel Bayesian approach extending recent developments in deep sequence models (e.g. Vision Transformers). These networks provide a natural mechanism to model sequential data such as 2D video, yet using Transformers to model the complex dynamics of heart motion is conceptually new and powerful. The motion will be modelled as the spatiotemporal (3D+t) sequence of heart shapes across the cardiac cycle, extracted from cine Cardiac Magnetic Resonance (CMR) images. The atlas will be a recurrent model that, given a sequence, predicts a probability distribution function (pdf) for the following heart status. The critical aspect is that the pdf will be conditioned on the patient's metadata (age, gender, ethnicity, etc.). Thus, by measuring the spatial deviations from the expected shape at each phase, the atlas will allow very accurate quantification of anatomical and functional cardiac abnormalities (and variances showing uncertainties) specific to the patient.
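To make the modelling idea concrete, the sketch below implements a toy version of such an atlas: a recurrent network over per-phase shape vectors, conditioned on metadata, emitting per-phase Gaussian parameters, with deviations from the predicted pdf used as an abnormality score. All dimensions are illustrative, and a GRU stands in for the Transformer-based model envisaged by the project.

    import torch
    import torch.nn as nn

    class ConditionalMotionAtlas(nn.Module):
        """Recurrent motion atlas (sketch): predicts a Gaussian pdf over the
        next heart shape, conditioned on patient metadata."""
        def __init__(self, shape_dim=256, meta_dim=8, hidden=128):
            super().__init__()
            self.rnn = nn.GRU(shape_dim + meta_dim, hidden, batch_first=True)
            self.mu = nn.Linear(hidden, shape_dim)
            self.logvar = nn.Linear(hidden, shape_dim)

        def forward(self, shapes, meta):      # shapes: (B, T, D), meta: (B, M)
            meta_seq = meta.unsqueeze(1).expand(-1, shapes.size(1), -1)
            h, _ = self.rnn(torch.cat([shapes, meta_seq], dim=-1))
            return self.mu(h), self.logvar(h)  # pdf parameters per phase

    def abnormality_score(mu, logvar, observed):
        """Mahalanobis-style deviation of the observed shape from the pdf."""
        return (((observed - mu) ** 2) / logvar.exp()).mean(dim=-1)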

We have extensive experience developing Bayesian and non-Gaussian statistical atlases from cardiac shapes and motion. However, the previous work discarded the patient metadata (such as age, gender, ethnicity, etc.). Therefore, the atlas was not clinically deployable to study cardiac motion abnormalities, which are relevant to various CVDs. 
The atlas will be derived from the UK Biobank CMR study, which aims to scan n>100,000 patients by 2022. The training of the atlas will be pursued as new releases of the data sets from the UK Biobank become available. We have established a collaboration with this study's clinical advisor and have full access to the CMR data sets. 

Contact: e-mail

_________________________________________________________________________________

Physics Informed Neural Networks for Prediction of Recurrence in Brain Tumours - Dr Ali Gooya

Glioma is a class of brain tumours primarily originating from glial cells in the brain. The most common malignant form of glioma, with the highest mortality rate, is Glioblastoma (GBM).

Even after tumour resection, GBM remains highly recurrent. A key reason is the infiltration of cancerous cells into the tumour periphery: diffuse malignant cells in quantities that are not detectable using conventional multi-modal MRI techniques, due to their limited sensitivity. Hence, to reduce recurrence, the common practice is to include an additional margin of two centimetres of healthy-looking brain tissue when resecting the tumour, typically followed by chemotherapy and the delivery of radiotherapeutic doses in the tumour's peripheral area. Unfortunately, despite significant advances in recent years, the median life expectancy is only about 12 months, and advancing GBM patient treatment techniques is an unmet clinical need.

Many researchers have tried to predict brain areas prone to recurrence using tumour growth models – partial differential equations (PDEs) that describe a probabilistic time-space distribution of the tumour cells beyond the detection sensitivity of MRI scanners. Given a set of MRI brain tumour images, these models are first personalised, i.e. the diffusion/proliferation coefficients and the initial tumour distribution are inferred from the observed MRI images. The latter is a challenging, ill-posed inversion problem that involves solving forward and backward Kolmogorov PDEs in high-dimensional spaces, which are NP-hard and computationally expensive. Even after a successful inversion, the correlations between the discovered tumour distribution at future time points, the delivered post-resection radiation doses, and the appearance of the MRI features remain largely unexplored.

In this project, we will leverage physics-informed neural networks (PINNs) that use multi-parametric brain tumour MRI and radiation dose images to predict the brain areas of tumour recurrence. The network will essentially use an encoder to discover the PDE parameters and a decoder that approximates the solution of the diffusion-reaction model (PINN-VAEs). In addition, we will explore the potential of methods based on Feynman-Kac Brownian motion dynamics. The proposed PINN's critical advantage is that it drastically reduces the dependency on training data – the Achilles' heel that has held back applications of deep learning techniques when patient data is scarce. Furthermore, the proposed research may improve the GBM patient survival rate by identifying recurrent tumour areas and thus informing radiotherapeutic treatments.
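To illustrate the PINN mechanics, the sketch below writes the residual of a Fisher-KPP-style reaction-diffusion model, u_t = D u_xx + rho u (1 - u), using PyTorch autograd. The 1D setting, network size and learnable coefficients are illustrative simplifications of the 3D clinical problem; a full loss would add data terms tying u to tumour visibility in MRI.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
    D = torch.nn.Parameter(torch.tensor(0.1))     # diffusion coefficient (learned)
    rho = torch.nn.Parameter(torch.tensor(0.5))   # proliferation rate (learned)

    def pde_residual(x, t):
        """Residual of u_t = D*u_xx + rho*u*(1-u), evaluated by autograd
        (1D space for brevity; the project targets 3D brain anatomy)."""
        x.requires_grad_(True)
        t.requires_grad_(True)
        u = net(torch.cat([x, t], dim=1))
        u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
        return u_t - D * u_xx - rho * u * (1 - u)

    # Minimising this residual on collocation points personalises D and rho.
    x = torch.rand(1024, 1)
    t = torch.rand(1024, 1)
    loss = pde_residual(x, t).pow(2).mean()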

Contact: e-mail

_________________________________________________________________________________

Visual Inspection of Deformable Objects - Dr Gerardo Aragon Camarasa

Our 3D vision technology has been demonstrated to resolve visual features to submillimetre accuracy (see https://eprints.gla.ac.uk/104927/1/104927.pdf). This technology has been adopted as the underlying visual perception system for the robotic manipulation of garments, where our system can produce highly detailed 3D reconstructions in 100ms.

This PhD aims to develop a solution for the visual inspection of defects in deformable objects. The specific objectives are to investigate core technologies for:

  • Implementing and optimising our core 3D matching algorithm
  • Mounting the stereo camera system on a robotic arm for active inspection
  • Devising self-supervised or unsupervised deep-learning technologies for detecting defects

Packing and folding of garments using cobots - Dr Gerardo Aragon Camarasa

Our current research outputs have shown that it is possible to manipulate garments of various shapes and materials. However, further research is required to increase the technology readiness to address the challenge of collaborative robotic packing and folding. This PhD project will aim to scope and investigate deep learning and AI technologies for:

  • Perceiving,
  • Quantifying,
  • Categorising,
  • And interpreting/understanding

the state of garments, as well as human intentions and collaboration. Our current approach to the visual perception of garments can recognise and categorise garments continuously while a robot interacts with them. Thus, this research will further investigate the feasibility of our approach and establish a codebase to underpin an investigation into dexterous garment manipulation for folding and packaging based on clients' requirements. Furthermore, human collaboration at different levels will be scoped in order to "teach" the robotic system how to dexterously manipulate garments and establish human-robot interactions for dexterous folding and packaging. This project will benefit from our robotic facilities, which include 2 cobots and a dual-arm industrial robotic system, plus access to a range of depth cameras, before a proof-of-concept is deployed.

Textile and Fabric Digital Twin - Dr Gerardo Aragon Camarasa

The objective of this project is to develop a digital twin that captures the physical and dynamic properties of fabrics and textiles via differentiable garment simulators (e.g. https://youtu.be/WWmWuhJcPYY). Current differentiable simulator technologies have been developed for the gaming and animation industries. The challenge for this project will therefore be to devise a methodology for capturing the physical and dynamic properties of garment materials when a robot manipulates them. Thus far, the CVAS group has benchmarked different differentiable simulators for robotic garment manipulation using our state-of-the-art garment datasets. These datasets comprise robotic and human interactions with garments, and the interaction of external forces such as wind.
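The core trick, gradients flowing from a simulated trajectory back to material parameters, can be shown with a toy differentiable "cloth" of a single mass and spring; real garment simulators are far richer, and every number below is illustrative.

    import torch

    def simulate(stiffness, damping, steps=100, dt=0.01):
        """Toy differentiable cloth proxy: one mass on a damped spring.
        The rollout is pure torch ops, so gradients flow from the trajectory
        back to the material parameters."""
        pos, vel = torch.tensor(1.0), torch.tensor(0.0)
        traj = []
        for _ in range(steps):
            acc = -stiffness * pos - damping * vel
            vel = vel + dt * acc
            pos = pos + dt * vel
            traj.append(pos)
        return torch.stack(traj)

    stiffness = torch.nn.Parameter(torch.tensor(5.0))
    damping = torch.nn.Parameter(torch.tensor(0.1))
    # Stand-in for a captured trajectory of real fabric (e.g. from mocap).
    observed = simulate(torch.tensor(12.0), torch.tensor(0.4)).detach()

    opt = torch.optim.Adam([stiffness, damping], lr=0.05)
    for _ in range(300):
        opt.zero_grad()
        loss = (simulate(stiffness, damping) - observed).pow(2).mean()
        loss.backward()
        opt.step()   # parameters converge toward the "real" fabric's values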

This is a high-risk, high-reward project idea which could benefit not only the packing and folding of garments (see the related project) but could also reduce design prototyping costs and waste, speed up pattern design, and provide a simulated environment in which AI technologies and robots can be trained for other manufacturing operations.

Self-Aware Autonomous Robotic Systems for On-Demand Manufacturing - Dr Gerardo Aragon Camarasa

This PhD project aims to create a self-aware computational substrate that enables autonomous robotic systems to adapt their physical configuration and prior knowledge to on-demand manufacturing. This project will explore the hypothesis that a self-aware robot can generate its rewards autonomously, and that these rewards can be utilised to learn and adapt to similar and new manufacturing operations. Humans can adapt to different tasks because we capitalise on the awareness of our body and prior knowledge [4]. We can also recognise and situate ourselves in unpredictable and unconstrained environments, e.g. on-demand, customisable manufacturing processes. In contrast, current autonomous robotic systems excel at one repetitive task. They must be reprogrammed if the manufacturing operation changes, and they cannot adapt to new tasks and environments, still less to on-demand manufacturing. Therefore, the solution is to equip a robot with a self-aware computational substrate that enables the robot to embody its operating self within the environment and allows it to adjust its prior knowledge to similar and new manufacturing tasks.

Intelligent manufacturing enterprises are underpinned by the convergence of highly connected operational technology systems and data-driven business insights, enabling self-diagnosing and self-monitoring of manufacturing processes to bootstrap productivity and operation [1]. In the UK, 80% of the manufacturing community is classed as Small and Medium Enterprises (SMEs) [2], where manufacturing operations comprise bespoke and discrete systems and evolve according to the clients' needs. These bespoke and discrete systems limit SMEs' adoption of Industry 4.0 technologies, such as the use of autonomous robotic systems. For example, SMEs in the manufacturing sector observed a 0.1% reduction in median turnover growth during the COVID pandemic because the lockdown impacted their operations, and SMEs could not adapt their manufacturing systems to the new supply chain requirements [3]. Therefore, this project will investigate self-awareness in autonomous robotic systems, enabling a robot to adapt and capture the operational know-how of on-demand, customisable manufacturing to bootstrap SMEs' productivity and resilience and realise the true potential of responsive supply chains.

[1] Pervez, MR, et al., 2022. Journal of Manufacturing Systems, 62;

[2] GOV.UK, Business population estimates for the UK and regions 2021;

[3] Hurley, J., et al., 2021. Impacts of the Covid-19 crisis: evidence from 2 million UK SMEs;

[4] Kwiatkowski, R., et al., 2019. Science Robotics, 4(26).

Contact: email web


Reinforcement learning in robotics manipulation using visual feedback - Dr J Paul Siebert 

The current state-of-the-art in robotics has reached the stage where affordable standard hardware platforms, such as arms and manipulators, controlled by powerful computers, are available at relatively low cost. What is holding back the development of general-purpose robots as the next mass-market consumer product is the current approach of painstakingly designing every behaviour-control algorithm necessary to operate the robot for each task it is required to perform.

The objective of this research is to investigate methods that allow the robot to discover how to undertake tasks, such as manipulating objects, by itself, by observing the outcome of hand-eye-object interactions. In this approach we propose to combine visual representations of the scene observed by the robot with manipulator interaction, to allow the machine to discover, i.e. learn, how to manipulate objects in specific situations: for example, how to manipulate cloth to flatten it, how to unfold or fold clothing, or how to grasp rigid and non-rigid objects of widely differing shapes.

The project would comprise investigating:

  • Appropriate visual representations for surfaces and objects which can be extracted from images captured by the robot's cameras using computer vision techniques – we already have appropriate 3D vision systems able to represent clothing and certain classes of object.
  • The relationship between the physical action applied by a manipulator and the outcome of this action, as determined by an algorithm that ranks the degree to which the applied action has brought the observed world-state towards a desired world-state, e.g. by how much did this flattening action make the cloth flatter, or did this grasping action allow the gripper to grab the cup?
  • Action sequences and their overall consequences for carrying out a task: for example, in non-monotonic learning strategies, early actions may actually reduce any intuitive measure of task progress prior to achieving larger gains later in the task (thereby achieving greater gains overall than a behaviour that attempts to achieve the same goal by means of monotonic, incremental improvements).

Our research group has excellent robot facilities on which this project will be based, including Dexterous Blue (housed in its own laboratory on Level 7 of the Boyd Orr building) – a large two-armed robot richly sensorised with a high-resolution (sub-millimetre) binocular vision system, lower-resolution Xtion RGBD sensors, in-gripper tactile sensing, and microphonic sensing for clothing and surface texture perception.

Our research using Dexterous Blue can be seen in action at: https://cordis.europa.eu/project/id/288553. Access to the Baxter robot, situated in the Computing Science foyer, will also be available, and an example of a student project using Baxter can be viewed at: https://www.youtube.com/watch?v=zyzaY4ur8As

Contact: email web


Cognitive Visual Architectures for Autonomous and Robotic Systems - Dr J Paul Siebert

How well a robot senses the world in effect defines the types of activity it is able to undertake. Vision is the richest of the senses, and recent advances in computer vision have enabled robots to perceive the world sufficiently well to map their environment, localise their position, and recognise objects, other robots, and the robot's own actuators. As a result, robots are now capable of navigating autonomously and interacting with specific objects (whose appearance has been learned) or classes of object, such as people or cars, again learned from large image databases.

Despite this progress, the state-of-the-art in robot vision is still incomplete in terms of being able to represent objects richly, whereby a full set of visual characteristics is extracted and represented to feed higher reasoning processes. The class of an object gives a noun (e.g. human, car, cup, etc.), while object attributes such as colour, texture, spatial location and motion provide adjectives (blue car, distant person). Shape, geometry, motion and other visual properties discovered through interaction can provide affordances (verbs from the robot's perspective) – what the robot can do with the object (move it by pushing, knock it over, lift it, etc.). These object properties, discovered by means of vision and interaction, can then be used in reasoning processes, the classic example being the ability of a machine to respond to the command "grasp the red box closest to the green bottle". Therefore, to be truly autonomous, a robot must be capable of understanding a scene and able to reason about it. To achieve such cognitive ability it is necessary to couple sensed visual attributes (and potentially attributes obtained from other sensing modalities) to a reasoning engine, allowing action plans to be deduced that let the robot achieve goals without the need for wholly pre-programmed actions.

There are potentially a number of projects that could be based on the above theme of vision for robots and other autonomous systems, based on the following investigations:

  • Visual processing architectures to support the extraction of a rich set of visual attributes (2D and 3D) from images captured by the robot's camera systems. This might include space-variant (foveated) visual architectures, as found in the mammalian visual system, supporting attention control and real-time execution.
  • Reasoning and planning systems for controlling interaction of the robot with the environment coupled with visual attention and perception, binding visual perception to action and learning.

Our research group has excellent robot facilities on which this project can be based, including Dexterous Blue (housed in its own laboratory on Level 7 of the Boyd Orr building) – a large two-armed robot richly sensorised with a high-resolution (sub-millimetre) binocular vision system, lower-resolution Xtion RGBD sensors, in-gripper tactile sensing, and microphonic sensing for clothing and surface texture perception. Our research using Dexterous Blue can be seen in action at: www.clopema.eu. Access to the Baxter robot, situated in the Computing Science foyer, will also be available, and an example of a student project using Baxter can be viewed at: https://www.youtube.com/watch?v=zyzaY4ur8As

Contact: email web

_________________________________________________________________________________

Deep generative models of images, videos and 3D scenes - Dr Paul Henderson

Deep generative models such as GANs, VAEs and diffusion models excel at generating realistic images. They can also be applied to videos, and even to 3D shapes or entire scenes. However, they do not typically have an interpretable latent space allowing easy, direct manipulation of the scene. As an alternative approach, the structured VAE paradigm learns an encoder jointly with the generative model, and explicitly separates the latent representation of an image into different objects and other aspects such as lighting. The decoder then explicitly enforces the semantic meaning of these latent variables by processing them with a combination of hand-engineered (e.g. differentiable renderers) and learnt components (i.e. neural networks). This means that not only can the latent representation be intuitively manipulated to control the generated scene, but the encoder can be used to decompose scenes into their constituent objects, and reconstruct their 3D shapes.

In this project, we shall extend deep generative models – specifically, structured VAEs – to new domains such as 3D games and street scenes, where there are diverse object appearances, complex interactions among objects, and large-scale environments. This will mean combining classical computer vision techniques (e.g. motion segmentation to discover objects) with state-of-the-art deep generative models and differentiable rendering.

References:

"Unsupervised object-centric video generation and decomposition in 3D", Henderson & Lampert, NeurIPS '20

"AutoRF: Learning 3D Object Radiance Fields from Single View Observations", Mueller et al., CVPR '22 "Leveraging 2D Data to Learn Textured 3D Mesh Generation", Henderson et al., CVPR '20

Contact: email

_________________________________________________________________________________

Modelling Close Human-Human and Human-Object Interactions for Human Digitization - Dr. Edmond S. L. Ho

The aim of this project is to propose new methods for modelling the close interactions between human-human and human-object. Such an approach can be used for tackling problems in a wide range of tasks, including scene understanding, pose estimation and 3D human reconstruction in Computer Vision, as well as synthesizing interactive contents in Computer Graphics and Virtual Reality.

Analysing human-human and human-object relationships from images plays an important role in providing contextual information in addition to low-level features (such as key points on the human and object). Although encouraging results have been demonstrated using data-driven and deep learning techniques in recent years, handling scenes which contain close interactions between humans and objects is still a challenging task, since the key entities (human(s) and object(s)) are usually partially occluded, resulting in low-quality input data. In this research, we will bridge this gap by utilising prior knowledge of close interactions to better model human-human and human-object interactions.

The candidate is expected to have strong programming skills, some prior experience in machine learning and visual computing (computer vision and/or computer graphics), and good English communication skills.

The supervisory team has extensive experience in this area and the details of the relevant publications can be found here: http://www.edho.net/projects/close_interaction/

Contact: email

______________________________________________________________________________________________

Video-based Prediction and Classification of Neurological Disorders – Dr. Edmond S. L. Ho

The aim of this project is to propose new machine learning frameworks for analysing body movement from videos. The project will focus on modelling human motion data in the spatiotemporal as well as frequency domains. The new frameworks can be applied to predicting neurological disorders from RGB videos. Our team has demonstrated the potential in preliminary studies, such as:

  • automating General Movements Assessment (GMA) as an early prediction of Cerebral Palsy (CP), with real-world data collected from our clinical collaborators
  • classifying musculoskeletal and neurological disorders among older people based on skeletal motion data
  • tremor classification for Parkinson's disease diagnosis from video (see the frequency-domain sketch below)
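As a small illustration of the frequency-domain angle mentioned above, the sketch below computes the share of spectral power in the 4-6 Hz band, where parkinsonian rest tremor typically concentrates, from a tracked joint trajectory. The band edges, frame rate and input format are illustrative assumptions.

    import numpy as np

    def tremor_features(joint_xy, fps=30.0):
        """Frequency-domain feature from a (T, 2) joint trajectory:
        fraction of spectral power in the 4-6 Hz tremor band, per axis."""
        sig = joint_xy - joint_xy.mean(axis=0)          # remove positional offset
        spec = np.abs(np.fft.rfft(sig, axis=0)) ** 2    # power spectrum per axis
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
        band = (freqs >= 4.0) & (freqs <= 6.0)
        return spec[band].sum(axis=0) / (spec.sum(axis=0) + 1e-9)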

The candidate is expected to have strong programming skills, some prior experience in machine learning and visual computing (computer vision and/or computer graphics), and good English communication skills.

The interdisciplinary supervisory team has extensive experience in this area and a strong network with clinical researchers and clinicians in the UK. The details of the relevant publications can be found here: http://www.edho.net/projects/babies/ and http://www.edho.net/projects/healthcare_mot/

Contact: email Shu-Lim.Ho@glasgow.ac.uk