Overview

 

The GLAsgow Systems Section (GLASS) researches parallel and distributed systems, networked systems, and (safety-critical) software systems. The section is currently led by Dr Jeremy Singer. We have a strong focus on real-world systems, covering all scales and the full hardware-software spectrum. We contribute to, develop, and release open-source research software. There are several research groups and labs within the section.

Much of the research we undertake is collaborative and involves industrial partners. We work closely with other groups in Computing Science and with other schools, including Engineering, as well as with other world-leading universities and many private and public sector organisations (recently: Airbus, Arm, Cisco Systems, EDF, Ericsson, IETF, Microsoft Research, NASA).

Members of GLASS contribute to several of the school's cross-cutting research themes including: 

Section members

Academic Staff:

Research Staff:

Associate Staff:

 

Honorary Staff:

 

Research Students:

School of Computing Science PGR Student list

  • Abdulrahman Khalid A Alshememry
  • Anthony Rainey
  • Arwa Hameed Alsubhi
  • Bishal Ghosh
  • Boning Zhang
  • Charles Varley
  • Elizabeth Boswell
  • Hongyun Sheng
  • Jacob Roberts
  • James Nurdin
  • Jiabo Shi
  • Jinming Yang
  • Jude Haris
  • Kai Feng
  • Kathleen West
  • Kelsey Collington
  • Martin Nahalka
  • Muhammad Arif
  • Naila
  • Nicholas Morris
  • Ohud Abdullah F Alasmari
  • Omodolapo Victor Babalola
  • Rappy Saha
  • Rech Leong
  • Robert Szafarczyk
  • Ruomeng Xu
  • Saleh Abdullah M Alfahad
  • Shijia Dong
  • Shivani
  • Sundas Rafat Mulkana
  • Teodor Karkashina
  • Vivian Band
  • Wenhao Hu
  • Xiangmin Xu
  • Xicheng Li
  • Yuting Wan
  • Yuxin Qin
  • Zhuoran Tan

Projects

Current Projects:

Past Projects:

Members of GLASS built YewPar: software providing general-purpose, distributed-memory parallel skeletons for combinatorial search problems, e.g. finding the largest clique in a graph.
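To give a flavour of the skeleton idea, here is a minimal sequential sketch (in Haskell, for brevity; YewPar itself is C++ and distributed-memory, and none of these names are YewPar's API). The user supplies problem-specific node generation and an objective; the skeleton supplies the tree traversal.

```haskell
-- A sequential caricature of a search skeleton: the skeleton owns the
-- traversal (plain depth-first maximisation); the user plugs in node
-- generation and an objective function.
searchMax :: Ord b => (a -> [a]) -> (a -> b) -> a -> b
searchMax children objective root =
  maximum (objective root : [ searchMax children objective c | c <- children root ])

main :: IO ()
main = print (searchMax grow length [])  -- prints 3
  where
    -- Toy search space: strictly increasing subsets of [1..3].
    grow xs = [ x : xs | x <- [1 .. 3], all (< x) xs ]
```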

Seminar series

Systems seminars are usually held on Tuesdays. Everyone from the University of Glasgow and beyond is welcome to attend these talks - see the Events tab for more details. We are happy to hear from anyone who would like to visit us to give a talk.

The Systems seminar is coordinated by Mr Duncan Lowther.

News and Highlights

October 2024

  • Dr Lauritz Thamsen won an EPSRC New Investigator Award to work on Carbon-Aware Scalable Processing in Elastic Clusters (Casper) from early 2025. The individual research project will be driven by a PDRA/RA and will also involve GLASS PhD student Kathleen West and academic Yehia Elkhatib. The industry and academic partners of the project are AWS, BBC R&D, and Humboldt University of Berlin. 

August 2024

  • A new advanced course on scalable and sustainable cloud systems, run by Dr Yehia Elkhatib and Dr Lauritz Thamsen, starts in the upcoming academic year. It is the first advanced course offered by our School with a strong focus on sustainable computing!

  • A new PhD project with Barclays starts this month! James Nurdin has started a PhD, co-supervised by the GLASS (Dr Lauritz Thamsen) and IDA sections and funded by Barclays, to optimise data services for sustainability using machine learning.

July 2024

RESEARCH GROUPS AND LABS IN GLASS

Research software

Members of the Systems section helped design and build Glasgow Parallel Haskell (GpH). It is one of the early robust parallel functional languages, and remains one of the most widely used parallel Haskell models; for example, GHC, the most popular Haskell compiler, supports it on multicores. The sophisticated GUM runtime system supports GpH on distributed-memory machines like clusters. The newer GUMSMP runtime system supports GpH on hierarchical architectures like NUMA machines or clusters of multicores.
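To give a flavour of the GpH programming model, here is a minimal sketch, assuming the `parallel` package that provides Control.Parallel: `par` sparks a computation that the runtime may evaluate in parallel, while `pseq` orders evaluation.

```haskell
import Control.Parallel (par, pseq)

-- Naive Fibonacci with GpH-style annotations: `par` sparks the first
-- recursive call for possible parallel evaluation; `pseq` forces the
-- second call before the results are combined.
pfib :: Int -> Integer
pfib n
  | n < 2     = fromIntegral n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

-- Build with `ghc -threaded`; run with `+RTS -N` to use multiple cores.
main :: IO ()
main = print (pfib 30)
```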

Members of the Systems section helped design and build Haskell distributed parallel Haskell (HdpH), a parallel Haskell for large-scale distributed-memory machines like clusters or HPC platforms. Crucially, HdpH is implemented in vanilla (GHC) Haskell.

Glasgow Network Functions (GNF) - Members of the section have developed an open-source, container-based Network Function Virtualization (NFV) framework that allows the transparent attachment of virtual Network Functions (NFs) to selected traffic in Software-Defined Networks.

Extending the matching abilities of OpenFlow - Members of the section have developed a protocol-independent, flexible alternative to today’s fixed OpenFlow match fields, based on Berkeley Packet Filters (BPF), for packet classification.

SDN-based Virtual Machine Management for Cloud Data Centers - Members of the section have developed an SDN-based software orchestration framework for live Virtual Machine (VM) management that exploits temporal network information to migrate VMs and minimise the network-wide communication cost of the resulting traffic dynamics.

GLASS video

An overview of our GLASS Research Section from Professor Dimitrios Pezaros.

Events this week

There are currently no events scheduled this week


Upcoming events

Systems Teaching Discussion

Group: Systems Seminars
Speaker: Dr Jeremy Singer
Date: 26 November, 2024
Time: 14:00 - 15:00
Location: SAWB 422 and online -- https://uofglasgow.zoom.us/j/86057258886?pwd=y2kOP3VMnErustCF9dtN6hHIVZ4VvH.1

Systems topics form a core spine of our undergraduate Computing Science curriculum. In this open discussion forum, we will explore the current systems topics that are mandatory for all students and the options available for Honours students. We will examine the connections between our courses, identifying areas for rationalisation. We will also consider possible new topics to be introduced, particularly in light of our new Cybersecurity Masters programme.


Past events

What we do in the shadows: using temporal networks for data analysis on cryptocurrency and social media (19 November, 2024)

Speaker: Dr Richard Clegg

A large number of datasets can be viewed as temporal networks: nodes that are linked by an event occurring at a particular time. Examples are numerous and include conversations (on social media or in real life), where an event is one node (person) talking to another at a given time; trade networks, where an event is an exchange of money between two nodes (people or organisations); and citation networks, where nodes are papers and an event is a paper published at a given time citing another paper. Raphtory is software originally written at Queen Mary to analyse temporal networks efficiently and at scale. The company Pometry was founded to develop and market the open-source Raphtory software.
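As a rough illustration of this data model (a minimal sketch; the names are illustrative and not Raphtory's API), a temporal network is simply an edge list in which every link carries the time at which it occurred:

```haskell
-- A temporal network as a timestamped edge list.
data Event = Event { from :: String, to :: String, at :: Int }

-- Neighbours that a node contacted within the time window [t0, t1].
neighboursIn :: String -> (Int, Int) -> [Event] -> [String]
neighboursIn node (t0, t1) events =
  [ to e | e <- events, from e == node, t0 <= at e, at e <= t1 ]

main :: IO ()
main = print (neighboursIn "alice" (0, 10)
                [Event "alice" "bob" 3, Event "alice" "carol" 42])
-- ["bob"]: only the edge inside the window counts.
```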

This talk looks at a number of examples of research using temporal networks and the insight that can be gained from looking at data efficiently on a number of time scales and as it changes in time. Specifically, I will show how temporal networks can provide insights into some suspicious behaviour in cryptocurrency markets and NFT trading. I will also show analysis of the alt-right online social network Gab and show how the growth of that network was driven by real-world events of interest to right-wing individuals, particularly in the US.

Dr Richard Clegg is a Senior Lecturer in Networks at Queen Mary University of London. His PhD in mathematics and statistics focused on dynamic networks, and this has remained his research interest to this day. His published work largely uses data analysis, in particular methods from complex networks, to analyse real-world data sets. He is also interested in the creation of synthetic network data sets for privacy-preserving data sharing.


Jamf for Systems Researchers: Is Big Brother Watching? (05 November, 2024)

Speaker: Dr Jeremy Singer

The University IT Services team is encouraging all macOS users to adopt the new “MyMac” remote device management service - see https://www.gla.ac.uk/myglasgow/it/desktops/mymac/ - from the Jamf corporation. For the past few months, I have been playing with Jamf on a managed MacBook. In this short talk, I will present my findings, including: performance overheads (minimal), user experience (slightly more inconvenient), workarounds (several), and data retention policies (unclear).


How do we explore SDOs through open data? Tour around the tooling in the Sodestream project (29 October, 2024)

Speaker: Dr Ryo Yanagida

Standards development organisations (SDOs) such as the IETF make a lot of data publicly available. The Sodestream project has been analysing these data to better understand the IETF. This talk will present an updated overview of the combination of tools we use to explore the data, and what you could potentially do with the toolchains we have built over the years.


Dynamic Loop Fusion in High-Level Synthesis (15 October, 2024)

Speaker: Robert Szafarczyk

High-Level Synthesis (HLS) compilers generate hardware from an untimed, loop-based C program. Just like traditional optimising compilers, HLS tools try to discover instruction- and memory-level parallelism in a sequential program, but with the caveat that they can tweak the underlying architecture to suit their goals. To support irregular codes, recent HLS research has moved from statically scheduling operations at compile time to adding various microarchitectural structures that enable dynamic scheduling at runtime. One such structure is the Load-Store Queue (LSQ), which can detect memory aliasing at runtime and increase the throughput of single loops. However, most irregular codes consist of multiple loops that designers would like to execute in parallel. This is not possible with existing LSQ implementations, nor is it possible to use traditional loop fusion due to its restriction to affine loops.

In this talk, I will describe a compiler/hardware co-design approach to fuse loops dynamically in hardware. I will present a new program-order schedule, inspired by polyhedral compilers and optimized for hardware, that can be parallelized (in contrast to the Program Counter used in LSQs). Our only requirement is that addresses form monotonically non-decreasing functions -- a much weaker requirement than affine functions. I will show the formalism that optimising compilers use to reason about loop expressions and what it means exactly for a loop to be affine vs. monotonic.
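To illustrate the distinction informally (a sketch, not the talk's formalism): an affine access with a non-negative stride, such as a[2i+1], produces a monotonic address trace, and the weaker monotonicity condition also admits non-affine accesses such as a[i*i].

```haskell
-- An address trace qualifies for dynamic fusion if it is
-- monotonically non-decreasing over time.
isMonotonic :: [Int] -> Bool
isMonotonic addrs = and (zipWith (<=) addrs (drop 1 addrs))

main :: IO ()
main = do
  print (isMonotonic [ 2 * i + 1 | i <- [0 .. 9] ]) -- True: affine access a[2i+1]
  print (isMonotonic [ i * i     | i <- [0 .. 9] ]) -- True: monotonic but not affine, a[i*i]
  print (isMonotonic [ 9 - i     | i <- [0 .. 9] ]) -- False: a decreasing trace fails
```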


Optimisation of Convolutional Neural Networks (01 October, 2024)

Speaker: Dr Nikela Papadopoulou

Convolutional neural networks (CNNs) play a critical role in modern AI, commonly used for image and vision tasks. Efficient execution of CNNs is vital for inference serving, which is a major source of computing cycles in the cloud. While GPUs are commonly used for accelerating CNNs, CPU-based inference offers higher availability and flexibility, especially with the growing parallel processing capabilities of long vector architectures. These architectures enable CPUs to handle the computational demands of CNNs more efficiently, but co-design of the underlying hardware and algorithms is essential to maximize performance.

In this talk, we will explore the optimization of CNNs on RISC-V vector architectures, focusing on three key algorithms: Direct, im2col+GEMM, and Winograd. Through a performance analysis and a co-design study of vector lengths, cache sizes, and algorithmic variants, we demonstrate how algorithm selection can boost performance and throughput for models like VGG16 and YOLOv3, while also addressing performance/area tradeoffs.


Estimating and improving image quality for humans and AI using optimization, generative models, and vision language models (10 September, 2024)

Speaker: Takamichi Miyata

The widespread use of smartphones has made it common for us to communicate using images in our daily lives. Furthermore, in recent years, images are used not only for communication between humans, but also for various advanced decisions made by AI. Noise introduced during image acquisition and compression coding can degrade both the quality of communication using these images and the recognition accuracy of AI.
This presentation will introduce the results of research to date on estimating the perceptual quality of degraded images and on removing the degradation. It will explain in detail mathematical optimization techniques such as tensor restoration, as well as how deep learning, especially diffusion-based image generation models and vision language models, can be applied to this task.


Network Infrastructure to Realize City as a Service (09 September, 2024)

Speaker: Sumiko Miyata

My research area is network infrastructures to realize "City as a Service" in smart cities, where city services can be used like applications on a smartphone. To realize the various services in City as a Service, large amounts of monitoring data need to be processed quickly and reliably. My research interests include traffic control and information security. In this presentation, we will describe network infrastructure technologies to realize City as a Service from the viewpoints of traffic control and information security.
This presentation will also introduce our efforts to realize a secure data management system using a start-up grant from the Tokyo Metropolitan Government, and examples of our participation in overseas exhibitions related to our proposed network infrastructure.


How debuggable is your (compiler-optimised) program? (03 September, 2024)

Speaker: Stephen Kell

Source-level debugging of compiled code only works when compilers generate the necessary metadata. Currently, that means it rarely works well, at least in optimising ahead-of-time compilers like LLVM and GCC. I'll give an overview of how compiler-generated metadata enables source-level debugging, the challenges of making it work for optimised code, and our recent work on doing better. Whereas compilers have so far taken a "best-effort" approach with no particular correctness criterion, I'll outline a correctness condition for local variable information that seems to balance the relevant trade-offs. I'll then describe a tool we've built that can use this to mechanically find valid LLVM bugs capturing avoidable losses or corruptions of debug info. A theme will be how the textbook framing of compiler optimisations as "eliminating" code or variables could be more constructively thought of as "residualising" them into debug info; I'll finish with some thoughts on what that could mean for how compilers are built. All this is joint work with J. Ryan Stinnett.


Stephen Kell does practical research on programming systems, with the goal of making computers work for human beings and not vice versa, and a focus on infrastructure software including operating systems and language runtimes. Some past and present research topics include: realistic formal and metaprogrammable specifications of operating systems' linking, loading and system call interfaces; reflective run-time services in Unix-like processes; using the latter to provide new kinds of dynamic checking in C and other 'unsafe' languages; and making debugging of optimized code more reliable. He is currently a Lecturer (Assistant Professor) in Computer Science at King's College London.


DNN64: An ML Compiler Toolchain for the Nintendo 64 (26 June, 2024)

Speaker: Perry Gibson

Title: DNN64: An ML Compiler Toolchain for the Nintendo 64

Abstract: In recent years, Deep Neural Networks (DNNs) have driven innovations in the computer hardware space, due to their increasingly high computational and memory demands. However, as shown by the TinyML community, DNNs can also operate on modest hardware such as microcontrollers, through careful hardware-software-DNN co-design.

This talk explores running DNNs on retro hardware using modern tools, specifically on Nintendo’s 1996 video game console, the N64. The proposed “DNN64” is a compiler tailored for the N64, built using a modified Apache TVM compiler.

Perry will link the project to his PhD topic (“Compiler-centric Across-stack Deep Learning Acceleration”), and talk about things such as the goodies found in old Silicon Graphics manuals, how compilers aren’t scary (they’re just a bunch of print statements), the co-design problems that emerge when running DNNs on a whole 4MB of memory, and writing kernels for the N64’s proto-GPU (✨the Reality Signal Processor🪄).

Speaker Bio: Perry recently completed his PhD with Dr José Cano Reyes, and is currently exploring employment opportunities in the DNN compiler space.


Zoom link:
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698
Passcode: 803628


Federated PCA on Grassmann Manifold for IoT Anomaly Detection (21 May, 2024)

Speaker: Dr Nguyen H. Tran

This systems seminar is delivered by Dr Nguyen H. Tran from the University of Sydney, who is hosted by Nguyen Truong. 

Please find the details below: 

Title: Federated PCA on Grassmann Manifold for IoT Anomaly Detection

Abstract: With the proliferation of the Internet of Things (IoT) and the rising interconnectedness of devices, network security faces significant challenges, especially from anomalous activities. While traditional machine learning-based intrusion detection systems (ML-IDS) effectively employ supervised learning methods, they possess limitations such as the requirement for labelled data and challenges with high-dimensional data. Recent unsupervised ML-IDS approaches like AutoEncoders and Generative Adversarial Networks (GAN) offer alternative solutions but pose challenges in deployment onto resource-constrained IoT devices and in interpretability. To address these concerns, this paper proposes a novel federated unsupervised anomaly detection framework -- FedPCA -- that leverages Principal Component Analysis (PCA) and the Alternating Direction Method of Multipliers (ADMM) to learn common representations of distributed non-i.i.d. datasets. Building on the FedPCA framework, we propose two algorithms, FedPE in Euclidean space and FedPG on Grassmann manifolds, and analyze their convergence characteristics. Our approach enables real-time threat detection and mitigation at the device level, enhancing network resilience while ensuring privacy. Experimental results on the UNSW-NB15 and TON-IoT datasets show that our proposed methods offer anomaly detection performance comparable to non-linear baselines, while providing significant improvements in communication and memory efficiency, underscoring their potential for securing IoT networks.

Bio: Nguyen H. Tran is an Associate Professor at the School of Computer Science, University of Sydney. He was an Assistant Professor with the Department of Computer Science and Engineering, Kyung Hee University, from 2012 to 2017. He received BS and PhD degrees from HCMC University of Technology and Kyung Hee University in 2005 and 2011, respectively. His research group has special interests in Distributed compUting, optimizAtion, and machine Learning (DUAL group). He received several best paper awards, including IEEE ICC 2016 and ACM MSWiM 2019. He received Korea NRF Funding for Basic Science and Research 2016-2023, an ARC Discovery Project 2020-2023, and a SOAR award 2022-2023. He serves as an Editor for several journals, including IEEE Transactions on Green Communications and Networking (2016-2020), IEEE Journal on Selected Areas in Communications, and IEEE Transactions on Machine Learning in Communications and Networking.

Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698
Passcode: 803628


Could Erlang-Style Supervision Improve the Availability of Microservice Systems? (07 May, 2024)

Speaker: Jacob Roberts

The second half of the Systems Seminar is delivered by Jacob Roberts.

Please find the details below:

Title: Could Erlang-Style Supervision Improve the Availability of Microservice Systems?

 

Abstract:

Microservices are a popular software architecture used by high-profile companies like Netflix, Uber, etc. The Kubernetes platform uses probes to check the health of containers. The probes must be carefully configured and may be slow to detect failure. In contrast, in Erlang supervision, child processes signal failure to a supervisor process that can take corrective action. Potentially, signalling could reduce failure detection time and be easier to configure than probes.

This talk presents an initial investigation into Erlang-style supervision of Kubernetes microservices. It outlines a prototype supervisor implementation for Kubernetes and presents experimental results comparing the new scheme to Kubernetes probes.


 

Zoom link:
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698
Passcode: 803628

 


Safe Human Robot Collaboration (07 May, 2024)

Speaker: Sundas Rafat Mulkana

The first part of the systems seminar is delivered by Sundas. 

Please find the details below:

Title: Safe Human Robot Collaboration
Abstract: The future direction of collaborative robots is shifting from predefined rules for interaction between human and robot in structured environments, such as industrial settings, to more flexible action in unstructured environments, such as households and public places. These futuristic robots, trained with learning-based methods, show promising results in simulation and controlled environments. However, when deploying these robots in the real world, ensuring human physical and cognitive safety raises major concerns. This necessitates the development of methods that provide provable safety constraints on robot motion, making the robot cognizant of its proximity to humans while performing a task and thus providing formal guarantees that a robot trained with learning-based methods will not come into harmful contact with humans. Additionally, studying the effect of these safety constraints on task performance and the ease of collaboration, through user feedback in joint action tasks between human and robot, would further help improve robot behavior.

The primary objectives of this research are to develop methods that prove safety without compromising the performance of the robot, to train robots to react dynamically to human behavior in the real world, and to identify human-preferred robot motions for collaborative tasks. The outcomes of this research aim to contribute significantly to the integration of safe collaborative robots in social, healthcare, and industrial environments. By addressing the critical need for safety guarantees and naturalness in human-robot interaction, this work aims to pave the way for safer and more effective human-robot collaboration.
 

Zoom link:
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698
Passcode: 803628


RIPEn at Home: Surveying Internal Domain Names using RIPE Atlas (30 April, 2024)

Speaker: Elizabeth Boswell

The second half of the Systems Seminar is delivered by Elizabeth Boswell.

Please find the details below.

Title: RIPEn at Home: Surveying Internal Domain Names using RIPE Atlas

Abstract: 

Internal domain names are domain names that are only valid in a local network. For example, many home networks use an internal name, such as "gateway.home", to refer to the home gateway/home router. Queries for these names are resolved by the home gateway and should not be sent to the global DNS. 
 
A name collision occurs if an internal name also exists in the global DNS, a query for the internal name is accidentally sent to the global DNS, and the response differs from the local response. This can happen, for example, if queries are accidentally sent to a public resolver. Name collisions can lead to security issues, as the global DNS domain name can be used to spoof the local device.
 
While previous studies of name collisions used passive measurement data, we use active measurements on RIPE Atlas to survey the use of internal names in home networks. We discover 3092 names, used by 4305 RIPE Atlas probes, of which 2.13% are at acute risk of name collision, and 34.51% are at risk of collision if their top-level domain is delegated.
 

Zoom link:
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698
Passcode: 803628


Informing, instructing, or ignoring: challenges and considerations for designing machine learning software for users in CNI. (30 April, 2024)

Speaker: Kelsey Collington

The first part of the Systems seminar on 30/04/2024 is delivered by Kelsey Collington.

Please find the details below:

Title: Informing, instructing, or ignoring: challenges and considerations for designing machine learning software for users in CNI.
 
Abstract: Successful integration of machine learning-based software to help protect against, detect, and respond to cyber incidents is one means of enhancing digital resilience. Introducing such software facilitates human-machine interaction by supporting higher levels of machine automation. Importantly, existing research highlights that human-machine interaction needs to be tuned to the particular groups of people involved.
 
This research focuses on CNI organisations that tend to be risk averse and prioritise the safe running of physical processes. As a result, these CNI organisations approach cyber security from a different perspective to non-safety-critical organisations. This is a group of end users that is underrepresented within existing research into human-machine interaction, and therefore there is a lack of understanding of the challenges and considerations of designing machine learning software for these end users. To address this research gap, I have been conducting semi-structured interviews with personnel from the nuclear industry who have experience with industrial control systems and cyber security. In this talk I will discuss some of the preliminary findings of these interviews. I will then discuss how this research builds upon a related body of existing research and lays the foundation for future research directions.
 

Zoom link:
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698
Passcode: 803628


Exploring 3D Human Pose Estimation and Forecasting from the Robot’s Perspective: The HARPER Dataset (16 April, 2024)

Speaker: Dr Emma Li

This systems seminar is delivered by Dr Emma Li, a staff member of GLASS.

Please find the details below.

Title: Exploring 3D Human Pose Estimation and Forecasting from the Robot’s Perspective: The HARPER Dataset
 
Abstract: In this talk, I will introduce our recent work on human-robot interaction: HARPER, a novel dataset for 3D body pose estimation and forecasting in dyadic interactions between users and Spot, the quadruped robot manufactured by Boston Dynamics. The key novelty is the focus on the robot’s perspective, i.e., on the data captured by the robot’s sensors. These data make 3D body pose analysis challenging because the robot’s viewpoint, close to the ground, captures humans only partially. The scenario underlying HARPER includes 15 actions, of which 10 involve physical contact between the robot and users. The corpus contains not only the recordings of Spot’s built-in stereo cameras, but also those of a 6-camera OptiTrack system (all recordings are synchronized). This leads to ground-truth skeletal representations with precision below a millimetre. In addition, the corpus includes reproducible benchmarks on 3D Human Pose Estimation, Human Pose Forecasting, and Collision Prediction, all based on publicly available baseline approaches. This enables future HARPER users to rigorously compare their results with those we provide in this work.
 
Bio: Emma Li is a Lecturer in Responsible & Interactive Artificial Intelligence at the School of Computing Science, University of Glasgow (UofG), UK. Prior to joining UofG, she was a senior lecturer at Northumbria University, Newcastle, UK. She visited the Georgia Institute of Technology, GA, USA, in 2008-2010 and Lehigh University, PA, USA, in 2016. She leads the Interactive and Trustworthy AI lab, working on human-robot interaction and cyber security. Her research goal is to accelerate the adoption of human-robot partnership in industry and society. She has successfully delivered 6 projects and published 50 peer-reviewed papers. Her recent work on robot behaviour-based user authentication received a best workshop paper award at IEEE INFOCOM 2021.
 

Zoom link:
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698
Passcode: 803628



Bit-shift accelerators for non-uniform quantization (12 March, 2024)

Speaker: Rappy Saha

This systems seminar is delivered by Rappy Saha. 
 
Please find the details below:

Title: Bit-shift accelerators for non-uniform quantization.
 
Abstract: 
Power-of-Two quantization (PoT) is a non-uniform quantization approach wherein data, such as weights or activations, are quantized to a power-of-two format (2^N). This format offers distinct advantages, primarily in terms of memory savings and decreased communication overhead with memory. Additionally, it facilitates computational efficiency by allowing multiplication operations to be replaced with shift operations, making computations more resource-efficient.
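As a worked illustration of the shift-for-multiply idea (a minimal sketch; the function names are illustrative and not from any particular accelerator or framework):

```haskell
import Data.Bits (shiftL)

-- Quantise a positive weight to its nearest power-of-two exponent.
quantisePoT :: Double -> Int
quantisePoT w = round (logBase 2 w)

-- Multiplying by a PoT-quantised weight 2^n is just a left shift:
-- x * 2^n == x `shiftL` n (for n >= 0).
potMul :: Int -> Int -> Int
potMul x n = x `shiftL` n

main :: IO ()
main = do
  let n = quantisePoT 3.7 -- exponent 2: the weight is stored as 2^2
  print (potMul 10 n)     -- 40, computed without a hardware multiplier
```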
 
Prior studies have showcased the utility of PoT quantization in accelerating machine learning (ML) models, introducing various methodologies and corresponding bit-shift accelerators. While some PoT strategies may enhance efficiency for certain ML models, this advantage isn't universal. Additionally, many existing schemes and accelerators are proprietary and fail to support a broad array of PoT quantization approaches.
 
In this work we target the development of versatile and efficient bit-shift accelerators capable of accommodating a diverse range of PoT quantization techniques. Our main objective is to design open-source bit-shift accelerator solutions that seamlessly integrate with widely-used ML frameworks, such as TF-Lite.

Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Special Systems Seminar (07 March, 2024)

Speakers: Abdullah Giray and Geraldo F. Oliveira

This session's speakers were kindly organised by Jose Cano Reyes.

The speakers are both PhD candidates from ETH Zurich. 
Due to the timing, this session runs longer than usual — please note the start and end time. 


Please find the details below:

Speaker: Abdullah Giray

Title: Enabling Efficient and Scalable DRAM Read Disturbance Mitigation via New Experimental Insights into Modern DRAM Chips
 
Abstract:
DRAM is the prevalent main memory technology due to its high density and low latency characteristics. The increasing need for faster access rates and larger DRAM capacity motivates improving the DRAM chip density. Manufacturing technology node size shrinks over DRAM chip generations to provide higher DRAM chip density. This technology scaling causes DRAM cell size and cell-to-cell distance to reduce significantly. As a result, DRAM cells become more vulnerable to read disturbance, i.e., accessing a DRAM cell disturbs data stored in another physically nearby cell.
 
To provide a deeper understanding of and solutions to DRAM read disturbance, we 1) conduct experimental studies on real DRAM chips, investigating the effects of temperature, access patterns, intra-chip variations, and wordline voltage; and 2) propose architecture-level solutions to mitigate DRAM read disturbance as it is exacerbated by technology node scaling, while existing mitigations face practicality challenges due to their fundamental need for proprietary information to be exposed. This talk will provide a summary of these works.
 
Bio:
Giray is a Ph.D. candidate in the Safari Research Group at ETH Zürich, working with Prof. Onur Mutlu. His current broader research interests are in computer architecture, systems, and hardware security with a special focus on DRAM robustness and performance. In particular, his PhD research focuses on understanding and solving DRAM read disturbance vulnerability. Giray has published several works on this topic in major venues such as HPCA, MICRO, ISCA, DSN, and SIGMETRICS. One of these works, BlockHammer, was named as a finalist by Intel in 2021 for the Intel Hardware Security Academic Award. Giray's research is in part supported by Google and the Microsoft Swiss Joint Research Center.
 

Speaker: Geraldo F. Oliveira
Title: Methodologies, Workloads, and Tools for Processing-in-Memory: Enabling the Adoption of Data-Centric Architectures.
 
Abstract:
The increasing prevalence and growing size of data in modern applications have led to high costs for computation in traditional processor-centric computing systems. Moving large volumes of data between memory devices (e.g., DRAM) and computing elements (e.g., CPUs, GPUs) across bandwidth-limited memory channels can consume more than 60% of the total energy in modern systems. To mitigate these costs, the processing-in-memory (PIM) paradigm moves computation closer to where the data resides, reducing (and in some cases eliminating) the need to move data between memory and the processor. There are two main approaches to PIM: (1) processing-near-memory (PnM), where PIM logic is added to the same die as memory or to the logic layer of 3D-stacked memory; and (2) processing-using-memory (PuM), which uses the operational principles of memory cells to perform computation.
 
Many works from academia and industry have shown the benefits of PnM and PuM for a wide range of workloads from different domains. However, fully adopting PIM in commercial systems is still very challenging due to the lack of tools and system support for PIM architectures across the computer architecture stack, which includes: (i) workload characterization methodologies and benchmark suites targeting PIM architectures; (ii) frameworks that can facilitate the implementation of complex operations and algorithms using the underlying PIM primitives; (iii) compiler support and compiler optimizations targeting PIM architectures; (iv) operating system support for PIM-aware virtual memory, memory management, data allocation, and data mapping; and (v) efficient data coherence and consistency mechanisms. Our goal in this talk is to highlight tools and system support for PnM and PuM architectures that aim to ease the adoption of PIM in current and future systems.
 
Bio:
Geraldo F. Oliveira is a Ph.D. candidate in the Safari Research Group at ETH Zürich, working with Prof. Onur Mutlu. His current broader research interests are in computer architecture and systems, focusing on memory-centric architectures for high-performance and energy-efficient systems. In particular, his Ph.D. research focuses on taking advantage of new memory technologies to accelerate distinct classes of applications and provide system support for novel memory-centric systems. Geraldo has published several works on this topic in major conferences and journals such as HPCA, ASPLOS, ISCA, MICRO, and IEEE Micro.

Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


SODA-OPT Compiler Frontend of the Software Defined Architectures (SODA) Toolchain (01 March, 2024)

Speaker: Nicolas Bohm Agostini

This is an ad-hoc external systems seminar organised by Jose! 

Please find the details below:

Bio: Nicolas Bohm Agostini is a Ph.D. candidate in the Electrical and Computer Engineering Department at Northeastern University (NEU, Boston, MA) and a Computer Scientist at the Pacific Northwest National Laboratory (PNNL). He completed his bachelor’s degree in electrical engineering at the Universidade Federal do Rio Grande do Sul (UFRGS, Brazil) in 2015, followed by a Master of Science in Electrical and Computer Engineering from NEU in 2022. With a strong focus on Computer Architecture and High-Performance Computing, Nicolas has gained extensive expertise in accelerating machine learning and linear algebra applications. As a passionate educator, he has taught Compilers, GPU Programming, and Embedded Robotics courses. Nicolas joined PNNL in 2020 and is the lead developer of the SODA-OPT compiler, which automates system-level partitioning of high-level applications and enables automatic code optimizations for superior custom hardware generation outcomes.

Description: We invite you to explore the work of Nicolas Bohm Agostini, a leading researcher in high-performance computing and compilers for High-Level Synthesis (HLS). In this talk, Nicolas will present SODA-OPT, the compiler frontend of the Software Defined Architectures (SODA) toolchain. SODA-OPT harnesses the power of the MLIR compiler infrastructure to automate the generation of custom hardware accelerators for applications programmed with high-level productive programming frameworks (e.g., Tensorflow, PyTorch). By employing specialized abstractions and leveraging MLIR, SODA-OPT automates the initial steps of generating custom accelerators through HLS, enabling non-experts to create efficient FPGA or ASIC designs, reducing or eliminating the need for the manual effort of an HLS expert. The compilation flow demonstrated in this talk showcases the automatic generation of accelerators for linear algebra and deep neural networks (DNN) operators. Experimental results with kernels from the PolyBench benchmark show that the SODA-OPT optimization pipeline can improve the runtime of synthesized accelerators by up to 60x. Join us to learn how the SODA-OPT compiler and SODA toolchain can enhance your accelerator development process.


Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


The challenges of overseeing and influencing the cybersecurity of supply chains to critical infrastructure. (27 February, 2024)

Speaker: Tania Wallis

This week's Systems seminar is delivered by Tania Wallis. Please find the details below.

Title: The challenges of overseeing and influencing the cybersecurity of supply chains to critical infrastructure.

Abstract: This talk will describe an EPSRC Impact Acceleration project that is working in the interactive space between customers, suppliers, and government actors to develop a partnership framework for managing shared responsibilities and cybersecurity improvements across the supply chain networks of UK essential services. This work examines touch points with the supply chain and the tailoring of implementations to operational technology (OT) contexts, and includes end users from the energy, rail, aviation & water sectors.


Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Guided Equality Saturation: Semi-automatic Term Rewriting (13 February, 2024)

Speaker: Phil Trinder

This systems seminar is delivered by Prof. Phil Trinder.

Please find the details below:

Abstract: Rewriting is a principled term transformation technique with uses across theorem proving and compilation. While developing rewrite sequences manually is possible, this process does not scale to longer rewrite sequences. Automated rewriting techniques, like greedy simplification or equality saturation, work well without human input but don't scale to large search spaces.

This talk proposes a semi-automatic rewriting technique as a means to scale rewriting by allowing human insight at key decision points. Specifically, we propose guided equality saturation that embraces human guidance when fully automated equality saturation does not scale. The rewriting is split into two or more simpler automatic equality saturation steps: from the original term to a human-provided intermediate guide, and from the guide to the target. A guide can be a complete term, or a relatively concise sketch containing undefined elements that are instantiated by the equality saturation search.

We demonstrate the generality and effectiveness of guided equality saturation using two case studies:

1. As a tactic in the Lean 4 proof assistant. Proofs are written in the style of textbook proof sketches that omit details. Compared with unguided equality saturation, more properties can be proved, and the proofs execute in seconds rather than minutes.

2. In the compiler for the RISE array language, unguided equality saturation fails to perform optimizations within an hour while using 60 GB of memory. Guided equality saturation performs the optimizations with at most 3 guides, within seconds, and using less than 1 GB. The generated code outperforms that produced by the state-of-the-art TVM compiler.

The talk is an extended version of our POPL'24 presentation.


Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Can changes in the computational stack affect correctness of Deep Learning Models? (06 February, 2024)

Speaker: Dr Ajitha Rajan


The next Systems Seminar is delivered by Dr Ajitha Rajan from the School of Informatics at the University of Edinburgh.

Do join us in-person in SAWB422 or remotely on Zoom. 

Please find the details below:

Title: Can changes in the computational stack affect correctness of Deep Learning Models?

Abstract: It is well understood that deep learning models can be sensitive to small perturbations in input data and model architecture. There has been significant effort in making models more robust against these data and model perturbations. However, the effect of changes in the computational stack (deep learning frameworks, compilers, optimisations, hardware devices) during model deployment is not well understood. This talk will report on our testing and fault localisation research studying and fixing the effects of changes within the computational stack. We focus in particular on deep learning frameworks, as we found that changing the framework during deployment can affect up to 72% of the output labels.

Bio: Dr Ajitha Rajan is a Reader in the School of Informatics at the University of Edinburgh, where she started in 2013. She is a Royal Society Industry Fellow. Dr Rajan's research interests are in the area of software testing, verification, robustness, and interpretability of artificial intelligence applied to avionics, automotive, embedded systems, blockchains, and medical diagnostics. Her Royal Society Industry Fellowship focuses on testing the correctness of AI algorithms in self-driving cars. Dr Rajan has been awarded grants from EPSRC, the Royal Society, H2020, Facebook, GCHQ, Huawei, and SICSA.


Zoom Link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698

Passcode: 803628


Locally Predicting Task Runtimes Before Running Scientific Workflows on Heterogeneous Compute Infrastructure (30 January, 2024)

Speaker: Lauritz Thamsen

The next systems seminar is delivered by Lauritz Thamsen.

Please find the details below:

Title: Locally Predicting Task Runtimes Before Running Scientific Workflows on Heterogeneous Compute Infrastructure
 
Abstract: Many resource management techniques for task scheduling, energy and carbon efficiency, and cost optimization in workflows rely on a priori task runtime knowledge. Yet, building runtime prediction models on historical data is often not feasible in practice as applications, data, and infrastructure change. Online methods, on the other hand, which estimate task runtimes on specific machines while the workflow is running, have to cope with a lack of measurements during start-up. Frequently, scientific workflows are executed on heterogeneous clusters consisting of machines with different CPU, I/O, and memory configurations, further complicating runtime prediction due to different task runtimes on different machine types.
 
In this talk, I will present Lotaru, a method for locally predicting the runtimes of scientific workflow tasks before they are executed on heterogeneous compute clusters. Crucially, our method does not rely on historical data and copes with a lack of training data during start-up. To this end, we use microbenchmarks, reduce the input data to efficiently profile workflow tasks locally, and predict a task's runtime with a Bayesian linear regression based on the data points gathered from the local workflow profiling and the microbenchmarks. Our evaluation with five real-world scientific workflows shows that Lotaru outperforms state-of-the-art runtime prediction methods. In a second set of experiments, using the predicted runtimes for advanced workflow scheduling, carbon-aware time shifting, and cloud cost prediction enables results close to those achieved with perfect prior knowledge of task runtimes.
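To make the prediction step concrete, here is a much-simplified sketch of the idea. Lotaru uses Bayesian linear regression over locally profiled, microbenchmark-adjusted data points; plain least squares over (input size, runtime) pairs is shown here instead, and all names are illustrative.

```haskell
-- Fit a straight line through (input size, runtime) points gathered
-- from local profiling runs on down-sampled inputs.
fitLine :: [(Double, Double)] -> (Double, Double) -- (slope, intercept)
fitLine pts = (slope, intercept)
  where
    n = fromIntegral (length pts)
    mx = sum (map fst pts) / n
    my = sum (map snd pts) / n
    slope = sum [ (x - mx) * (y - my) | (x, y) <- pts ]
          / sum [ (x - mx) ^ (2 :: Int) | (x, _) <- pts ]
    intercept = my - slope * mx

-- Extrapolate the fitted line to the full input size.
predictRuntime :: [(Double, Double)] -> Double -> Double
predictRuntime pts size = let (a, b) = fitLine pts in a * size + b

main :: IO ()
main = print (predictRuntime [(1, 12), (2, 21), (4, 41)] 16) -- roughly 157
```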



Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698 
Passcode: 803628


Systems Seminar — Sizzler: PLC vulnerability discovery framework (16 January, 2024)

Speaker: Kai Feng

 
Abstract: 
Sizzler is a PLC vulnerability discovery framework underpinned by a novel mutation-based fuzzing strategy built on SeqGAN, together with a PLC firmware emulation approach. Sizzler translates PLC ladder diagrams (LDs) into C code that executes on representative MCUs, so as to emulate as realistically as possible a variety of PLC firmware environments across 30 PLC applications. Moreover, the synergy of the SeqGAN model with a havoc-based mutation strategy makes Sizzler highly effective at reaching new and deeper code paths, increasing the discovery of otherwise unseen PLC code vulnerabilities. In parallel, Sizzler has also been successfully deployed and assessed on a wider embedded-systems dataset of non-PLC applications, indicating its superiority over commonly used fuzzing schemes.
 
Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698 
Passcode: 803628


Decoding the IETF (09 January, 2024)

Speaker: Colin Perkins

This semester's first Systems Seminar talk is delivered by Colin Perkins. Join us to see how the IETF works!

Please find the details below:

Abstract:
The Internet Engineering Task Force (IETF) is the premier technical standards development organisation for the Internet. In this talk, I'll describe the goals and operation of the IETF, review the standards development process, and discuss the evolution of the organisation and its participants over time. I'll conclude with some reflections on lessons learned from 25 years of work in Internet standards.

Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698 
Passcode: 803628

 


Continual learning in sensor-based human activity recognition (12 December, 2023)

Speaker: Dr Juan Ye (Erica)

This systems seminar is delivered by Dr Juan Ye (Erica) from University of St Andrews, School of Computer Science.

She will be delivering the talk in-person. 

Please find the abstract and bio below:

Abstract: Human activity recognition (HAR) is a key enabler for many applications in healthcare, factory automation, and the smart home. It detects and predicts human behaviours or daily activities via a range of wearable sensors or ambient sensors embedded in an environment. As more and more HAR applications are deployed in real-world environments, there is a pressing need for the ability to continually and incrementally learn new activities over time without retraining the HAR model. In this talk, we will present our recent progress in developing various continual learning techniques for HAR, including regularisation, generative rehearsal, and dynamic architecture techniques. We will summarise what we have learnt from these projects and discuss future directions.

Bio: I am a Reader in the School of Computer Science at the University of St Andrews. My research interests centre around adaptive pervasive systems, specialising in sensor-based human activity recognition, sensor fusion, context awareness, ontologies, and uncertainty reasoning. I have a PhD in computer science from University College Dublin, Ireland, and BSc and MSc degrees from Wuhan University, China.

Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698 
Passcode: 803628


Truth or Dare? Attacking the AI (28 November, 2023)

Speaker: Dr Nick Pitropakis

This Systems Seminar is delivered by Dr Nick Pitropakis from Edinburgh Napier University.

Abstract:

Many machine learning methodologies typically function under the assumption of a benign environment. However, this assumption is not always valid, as adversaries may find it advantageous to maliciously tamper with either the training data (poisoning attacks) or the test data (evasion attacks). Given the increasing prevalence of machine learning applications in society, such attacks can have devastating consequences. Consequently, there is a pressing need to enhance the security of machine learning to ensure its safe and reliable adoption in adversarial scenarios.

Bio: 

Nick Pitropakis is an Associate Professor of cybersecurity at the School of Computing of Edinburgh Napier University, and a Fellow of the HEA. He is also a core member of the Blockpass Identity Lab. Dr Pitropakis has a strong research background in attacks against machine learning. His current research interests include adversarial machine learning, trust and privacy using distributed ledger technology, advanced cyber attack attribution, and data science applied to cyber security and IoT device security. Dr Pitropakis is leading the integrated apprenticeship scheme BSc Cyber Security, which is the first in the UK to receive full NCSC accreditation. He is teaching Cyber-related graduate apprenticeship degrees, both running in Scotland and England. He is also the external examiner of The American College in Greece (ACG), covering the BSc (Hons) Information Technology and BSc (Hons) Cyber Security and Networking programmes provided by The Open University, and the Lead External Examiner for MSc Cyber Security (Newcastle and London campuses) of Northumbria University. Dr Pitropakis is currently leading the Horizon Europe project Trust and Privacy-Preserving Computing Platform for Crossborder Federation of Personal Data (TRUSTEE).

Please find the zoom details below.
Zoom link: 
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Unlocking the Power of Data-Centric Acceleration for Modern Applications (07 November, 2023)

Speaker: Dr Haiyu Mao

This week's systems seminar will be delivered by Dr Haiyu Mao visiting from ETH Zurich. Join us in-person or online!
 
Abstract: In today's digital landscape, the exponential growth of data has become the driving force behind cutting-edge applications such as genome analysis and machine learning, revolutionizing our approach to healthcare and overall quality of life. However, this unprecedented deluge of data poses a formidable challenge to traditional von Neumann computer architectures. The inefficiencies arising from the constant data movement between processors and memory consume a substantial portion of both execution time and energy when running modern applications on conventional von Neumann computers. To reduce this significant data movement, data-centric architectures, particularly processing-in-memory accelerators, emerge as a promising solution by enabling the processing of data directly where it resides. Nonetheless, most existing data-centric architectures primarily focus on accelerating specific arithmetic operations, inadvertently leaving a substantial gap between the architectural enhancements and the holistic needs of modern applications. Concurrently, conventional software optimizations often treat the architecture as a black box, which inherently limits the potential acceleration of applications.
 
This talk seeks to bridge the gaps between modern applications and data-centric architectures and revolutionize the landscape of data-centric acceleration for two vital categories of modern applications: genome analysis and machine learning. Firstly, this talk offers a comprehensive analysis of the pressing challenges within state-of-the-art genome analysis pipelines and introduces an innovative end-to-end data-centric acceleration approach achieved through seamless software-and-hardware co-design. Secondly, this talk illuminates the path to closing the gap between data-centric accelerators and the execution of real-world applications by presenting a compelling case study centered on a crucial machine learning application based on generative adversarial networks. Furthermore, this talk delves into the intricate challenges of data-centric acceleration for modern applications and explores potential solutions to surmount these obstacles, paving the way for a future where data-centric acceleration seamlessly integrates with the ever-evolving landscape of advanced applications.
 
Speaker: Dr. Haiyu Mao is a postdoctoral researcher in the SAFARI Research Group led by Prof. Onur Mutlu at ETH Zurich, Switzerland. In July 2020, she received her Ph.D. degree in computer science from Tsinghua University, China. Her research interests lie at the intersection of computer architecture, processing in memory, bioinformatics, machine learning accelerators, non-volatile memory, and secure memory. Visit Haiyu’s personal website for more info.
 
Please find the zoom details below.
Zoom link: 
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Coherence Attacks and Defenses in 2.5D Integrated Systems (06 November, 2023)

Speaker: Prof. Paul Gratz


This special additional Systems Seminar features Professor Paul Gratz, visiting from Texas A&M University. Do come join us in-person or online! 

Abstract: Industry is moving towards large-scale hardware systems which bundle processor cores, memories, accelerators, etc. via 2.5D integration.  These components are fabricated separately as chiplets and then integrated using an interconnect carrier, i.e., an interposer. This new design style is beneficial in terms of yield and economies of scale, as chiplets may come from various vendors and are relatively easy to integrate into one larger sophisticated system. However, the benefits of this approach come at the cost of new security and integrity challenges, especially when integrating chiplets that come from not fully trusted, third-party vendors.

In this talk, I explore these challenges for modern interposer-based systems of cache-coherent, multi-core chiplets. First, I will present a new form of coherence-oriented hardware Trojan attack that poses a significant threat to chiplet-based designs, and demonstrate how these basic attacks can be orchestrated against interposer-based systems. Second, I will present our proposal for a novel scheme using an active interposer as a generic, secure-by-construction platform that forms a physical root of trust for modern 2.5D systems. The implementation of our scheme is confined to the interposer, resulting in little cost and leaving the chiplets and coherence system untouched. I will show that our scheme prevents a range of coherence attacks with low overheads on system performance (~4%). Overheads reduce as workloads increase, ensuring the scheme's scalability.

Bio: Paul V. Gratz is a Professor in the department of Electrical and Computer Engineering at Texas A&M University.  His research interests include efficient and reliable design in the context of high performance computer architecture, processor memory systems and on-chip interconnection networks.  He received his B.S. and M.S. degrees in Electrical Engineering from The University of Florida in 1994 and 1997 respectively.  From 1997 to 2002 he was a design engineer with Intel Corporation.  He received his Ph.D. degree in Electrical and Computer Engineering from the University of Texas at Austin in 2008.  His paper, "Synchronized Progress in Interconnection Networks (SPIN) : A New Theory for Deadlock Freedom," was selected as a Top Pick from the architecture conferences in 2018 by IEEE Micro. His papers "Path Confidence based Lookahead Prefetching" and "B-Fetch: Branch Prediction Directed Prefetching for Chip-Multiprocessors" were nominated for best papers at MICRO '16 and MICRO '14 respectively.  At ASPLOS '09, Dr. Gratz received a best paper award for "An Evaluation of the TRIPS Computer System."  In 2016 he received the "Distinguished Achievement Award in Teaching – College Level" from the Texas A&M Association of Former Students and in 2017 he received the "Excellence Award in Teaching, 2017" from the Texas A&M College of Engineering.

Please find the Zoom details below.
Zoom link: 
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628

 


On the (De)centralisation of Proof-of-Work Blockchain Protocols (31 October, 2023)

Speaker: Thomas Zacharias

Abstract: This talk is based on my most recent published work [1], along with a gentle introduction to the concept of Proof-of-Work blockchain protocols. 
One of the most important features of these protocols is decentralisation: their main contribution is a distributed ledger that is maintained and extended without the need for a trusted party. Nonetheless, Bitcoin, the most prominent Proof-of-Work blockchain system, has been criticised for its tendency towards centralisation, as very few pools control the majority of the hashing power. Pass et al. proposed FruitChain [ACM PODC 17] and claimed that this blockchain protocol mitigates the formation of pools by reducing the variance of the rewards in the same way as mining pools, but in a fully decentralized fashion. Many follow-up papers assume that the problem of centralisation in Proof-of-Work blockchain systems can be solved via lower reward variance, and that in FruitChain the formation of pools is unnecessary.
Contrary to this common perception, in [1] we prove that lower reward variance does not eliminate the tendency of PoW blockchain protocols towards centralisation; miners also have other incentives to create large pools, in particular to share the cost of creating the instance they need to solve the PoW puzzle. Thus, the existence of a Proof-of-Work protocol that provably incentivises the participants to act in a decentralised manner remains an open question. 
 
[1] Aikaterini-Panagiota Stouka and Thomas Zacharias. “On the (De)centralization of FruitChains”. In IEEE Computer Security Foundations Symposium (CSF), 2023.
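To see why pooled mining reduces reward variance in the first place, consider a toy per-round model: a solo miner whose share of the hashing power is p wins the full block reward R with probability p, whereas a member of an n-miner pool receives R/n whenever the pool (with total share n·p) wins. The sketch below works through this arithmetic; the parameters are illustrative and are not taken from the paper.

    /* Toy per-round reward model contrasting solo and pool mining.
     * Expected reward is identical; only the variance differs.
     * Illustrative parameters, not taken from the paper. */
    #include <stdio.h>

    int main(void) {
        double R = 6.25;    /* block reward */
        double p = 0.0001;  /* one miner's share of total hash power */
        int    n = 1000;    /* pool size (n * p must stay <= 1) */

        /* Solo mining: Bernoulli(p) payout of R. */
        double mean_solo = R * p;
        double var_solo  = R * R * p * (1.0 - p);

        /* Pool member: Bernoulli(n*p) payout of R/n. */
        double q = n * p;
        double mean_pool = (R / n) * q;  /* equals R * p */
        double var_pool  = (R / n) * (R / n) * q * (1.0 - q);

        printf("solo: mean=%.6f var=%.6e\n", mean_solo, var_solo);
        printf("pool: mean=%.6f var=%.6e\n", mean_pool, var_pool);
        return 0;
    }

The expected reward is the same in both cases; only the spread shrinks. The point of [1] is that equalising this variance, as FruitChain does, still leaves other incentives to pool, such as sharing the cost of creating the PoW instance.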
 
The seminar will be delivered in hybrid mode, both via Zoom and in person at SAWB 422.
 
Please find the Zoom details below.
Zoom link: 
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Machine Learning Systems: Opportunities in Partitioning and Pruning (10 October, 2023)

Speaker: Prof. Blesson Varghese

Abstract

My last (virtual) talk at Glasgow CS over a year ago was on our experience of making machine learning (ML) work for the edge focusing on optimising the computations within ML models. This talk will follow the same genre and I will present two techniques we have developed towards making machine learning more feasible on edge systems. The first is partitioning ML models, and this time I will focus on optimising communication bottlenecks. The second is pruning and I will present our work that builds on the Lottery Ticket Hypothesis to create compressed ML models suited for on-demand edge deployments.


 

The seminar will be delivered in hybrid mode, both via Zoom and in person at SAWB 422. Prof. Blesson Varghese will be presenting in person!

Please find the Zoom details below.
Zoom link: 
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628

 


"Morello MicroPython: A Python Interpreter for CHERI" and "Exploring Neural Network Model Composition for Environment-Aware Federated Learning" (03 October, 2023)

Speakers: Duncan Lowther and Cocoa Xu

This week's systems seminar consists of two talks delivered by Duncan Lowther and Cocoa Xu.

Please find the details below:

Title: Morello MicroPython: A Python Interpreter for CHERI
Speaker: Duncan Lowther
 
Abstract: Arm Morello is a prototype system that supports CHERI hardware capabilities for improving runtime security. As Morello becomes more widely available, there is a growing effort to port open source code projects to this novel platform. Although high-level applications generally need minimal code refactoring for CHERI compatibility, low-level systems code bases require significant modification to comply with the stringent memory safety constraints that are dynamically enforced by Morello. In this paper, we describe our work on porting the MicroPython interpreter to Morello with the CheriBSD OS. Our key contribution is to present a set of generic lessons for adapting managed runtime execution environments to CHERI, including (1) a characterization of necessary source code changes, (2) an evaluation of runtime performance of the interpreter on Morello, and (3) a demonstration of pragmatic memory safety bug detection. Although MicroPython is a lightweight interpreter, mostly written in C, we believe that the changes we have implemented and the lessons we have learned are more widely applicable. To the best of our knowledge, this is the first published description of meaningful experience for scripting language runtime engineering with CHERI and Morello.
 


Title: Exploring Neural Network Model Composition for Environment-Aware Federated Learning.
Speaker: Cocoa Xu
 
Abstract: This work explores how to discover and identify environments based on the data available to a device. Previous work explored the use of environmental information collected from on-device sensors for environment-aware neural network model composition in a federated learning system. Input samples that are in the same environment, as defined by a set of environmental tags, are aggregated together and contribute to the corresponding environment-aware model. This work proposes a generic workflow that helps users discover and identify different environments, and presents preliminary experiments that attempt to automate this process.
 

The seminar will be delivered in hybrid mode, both via Zoom and in person at SAWB 422.
 
Please find the Zoom details below.
Zoom link: 
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Systems Update plus Conservative Garbage Collection (26 September, 2023)

Speaker: Jeremy Singer

Abstract: First, I will give a brief update about our Systems Section as we start a new semester. Then I will spend some time talking about conservative garbage collection algorithms and their worst-case behaviour (joint work with Dejice Jacob).
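For context on the second topic: a conservative collector cannot tell which words on the stack are pointers, so it must treat any word whose value falls inside the heap as a potential pointer. The sketch below (a made-up heap layout, not the algorithm analysed in the talk) shows the idea, and why an integer that merely looks like a heap address forces an object to be retained, which is where the worst-case behaviour comes from.

    /* Minimal sketch of conservative root scanning: any word whose value
     * falls inside the heap range is treated as a potential pointer.
     * Hypothetical layout, not the algorithm discussed in the talk. */
    #include <stdint.h>
    #include <stdio.h>

    #define HEAP_WORDS 1024
    static uintptr_t heap[HEAP_WORDS];        /* the managed heap */
    static unsigned char marked[HEAP_WORDS];  /* one mark bit per word */

    static void mark_if_pointer(uintptr_t word) {
        uintptr_t lo = (uintptr_t)&heap[0];
        uintptr_t hi = (uintptr_t)&heap[HEAP_WORDS];
        if (word >= lo && word < hi) {
            /* Conservative: this *might* be a pointer, so the object must
             * stay alive. An integer that happens to fall in this range
             * is retained too -- that is the worst case. */
            marked[(word - lo) / sizeof(uintptr_t)] = 1;
        }
    }

    static void scan_roots(const uintptr_t *stack, size_t nwords) {
        for (size_t i = 0; i < nwords; i++)
            mark_if_pointer(stack[i]);
    }

    int main(void) {
        uintptr_t fake_stack[3];
        fake_stack[0] = (uintptr_t)&heap[10];      /* a real pointer */
        fake_stack[1] = 42;                        /* plainly not a pointer */
        fake_stack[2] = (uintptr_t)&heap[0] + 64;  /* integer that looks like one */
        scan_roots(fake_stack, 3);
        printf("word 10 marked: %d\n", marked[10]);
        printf("word  8 marked: %d\n", marked[8]); /* kept by accident (8-byte words) */
        return 0;
    }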

Zoom link: 
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Marrying up Deep Learning and Random Test Case Generation for Software Bug Detection (19 September, 2023)

Speaker: Prof. Zheng Wang

Abstract: Deep learning (DL) techniques have emerged as a powerful tool for constructing sophisticated models to identify software bugs and vulnerabilities. While promising, several challenges must be addressed to make DL-based bug detection practical for real-world scenarios. In this talk, I will discuss some of the research conducted by my group in leveraging deep learning and fuzzing (i.e., random test case generation) techniques to detect bugs in black-box compilers and source code. We have successfully applied our techniques to detect bugs and security flaws in JavaScript compilers and C programs. Our approach has uncovered over 160 new compiler bugs across various JavaScript engines, including those used in Apple Safari, Google Chrome, Microsoft Edge, and Firefox. Of these bugs, 150 have been verified, and the developers have already fixed 130 as a matter of urgency. When applying our techniques to 20 open-source projects using 200 hours of automated test runs, we discovered 53 new bugs, resulting in 30 unique CVE (Common Vulnerabilities and Exposures) IDs being assigned.

Speaker Bio: Zheng Wang is a Professor of Intelligent Software Technology at the School of Computing at the University of Leeds. He works at the intersection of machine learning and software techniques and is known for his work in incorporating machine learning into compilation technology. He has published over 150 papers and received four best paper awards and four HiPEAC paper awards. His research has been successfully transferred into various industry settings and the open-source community.

Zoom link: 
https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09
Meeting ID: 861 8632 5698
Passcode: 803628


Compiler Discovered Dynamic Scheduling of Irregular Code in High-Level Synthesis (29 August, 2023)

Speaker: Robert Szafarczyk

Abstract: High-level synthesis (HLS) compilers transform code written in a high-level software language, like C++, into a hardware description of a custom architecture, promising performance and efficiency improvements over off-the-shelf von Neumann architectures. Operation scheduling, the mapping of operation execution to clock cycles, is a central problem in any HLS compiler. Commercial HLS compilers produce a static schedule at compile time, failing to adapt to unpredictable runtime conditions. Recent academic HLS compilers have explored dataflow scheduling of operations at runtime to address this shortcoming, but they produce hardware that uses more resources and achieves lower operating frequencies than static HLS compilers. We show how existing static HLS compilers can be extended to support fine-grained dynamic scheduling of unpredictable parts of a code, with minimal impact on the resource usage and operating frequency of the final circuit.
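A classic instance of the irregular code at issue is a histogram loop, sketched below (an illustrative kernel, not taken from the talk): whether iteration i depends on iteration i-1 is only known once x[i] is read, so a static schedule must assume a loop-carried dependence on every iteration, while a dynamically scheduled circuit stalls only on an actual collision.

    /* hist[x[i]] may or may not depend on hist[x[i-1]]: the dependence is
     * data-dependent. Static HLS must schedule for the worst case every
     * iteration; dynamic scheduling pays only for real collisions.
     * Illustrative kernel, not taken from the talk. */
    void histogram(const int *x, int *hist, int n) {
        for (int i = 0; i < n; i++) {
            hist[x[i]] += 1;  /* read-modify-write at a runtime address */
        }
    }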

This will be streamed via Zoom.

Zoom link: https://uofglasgow.zoom.us/j/86186325698?pwd=STNMZ2Y5a1lwMzEvcWowYTFSSjJ1QT09

Meeting ID: 861 8632 5698
Passcode: 803628

 


Systems/PLUG joint seminar — OptiTrust: an Interactive Framework for Source-to-Source Transformations (27 June, 2023)

Speaker: Thomas Koehler

This is a joint seminar between Systems section and PLUG at the School of Computing Science. 

Please find the detail below:

Abstract: We present OptiTrust, an interactive framework for optimizing general-purpose C code via a series of programmer-guided, source-to-source transformations. Optimization steps are described in transformation scripts, expressed as OCaml programs. At every step, the programmer may interactively visualize the effect of the transformation as the difference between two pieces of human-readable C code. OptiTrust has previously been employed to optimize numerical simulation code. In this work, we showcase how to use OptiTrust to optimize matrix multiplication. We compare against TVM, which also relies on programmer guidance, but which restricts the input language and lacks easily readable feedback.

Zoom link: https://uofglasgow.zoom.us/j/86482689868?pwd=YW9jUEdXNXRaUHFPbWdTNGZYY1Rxdz09

Meeting ID: 864 8268 9868
Passcode: 576572


Exploring Neural Network Model Composition for Environment-Aware Federated Learning (20 June, 2023)

Speaker: Cocoa Xu

The second part of the Systems Seminar on 20/06/2023 is delivered by Cocoa Xu. 

Title: Exploring Neural Network Model Composition for Environment-Aware Federated Learning.

Abstract: This work explores how to discover and identify environments based on the data available to a device. Previous work explored the use of environmental information collected from on-device sensors for environment-aware neural network model composition in a federated learning system. Input samples that are in the same environment, as defined by a set of environmental tags, are aggregated together and contribute to the corresponding environment-aware model. This work proposes a generic workflow that helps users discover and identify different environments, and presents preliminary experiments that attempt to automate this process.

Zoom Link:
https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139
Passcode: 212149

 


Mobility multihoming duality for the Internet Protocol (20 June, 2023)

Speaker: Ryo Yanagida

The systems seminar on 20/06/2023 will be delivered by Ryo Yanagida.

Abstract: In the current Internet, mobile devices with multiple connectivity are becoming increasingly common; however, the Internet Protocol itself has not evolved accordingly. Instead, add-on mechanisms have emerged, but they do not integrate well. Currently, the user suffers from disruption to communication on the end-host as the physical network connectivity changes. This is because the IP address changes when the point of attachment changes, breaking the transport-layer end-to-end state. Furthermore, while a device can be connected to multiple networks simultaneously, the use of IP addresses prevents end-hosts from leveraging multiple network interfaces — a feature known as host multihoming, which can potentially improve throughput or reliability. While solutions exist separately for mobility and multihoming, it is not possible to use them as a duality solution for the end-host.
This talk will present an overview of ILNPv6, its extension, and evaluations carried out with the extended ILNPv6 implementation in the Linux kernel. 
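The identifier/locator split at the heart of ILNP can be sketched in a few lines: the 128-bit IPv6 address is reinterpreted as a 64-bit Locator, naming the network attachment and free to change on handoff, plus a 64-bit Node Identifier that stays fixed, with transport state bound only to the identifier. A simplified illustration under that two-field reading (not the Linux kernel implementation evaluated in the talk):

    /* Sketch of the ILNP identifier/locator split: sessions are keyed on
     * the stable Node Identifier, so a change of Locator (moving networks,
     * or using another interface) does not break the session.
     * Simplified illustration, not the kernel implementation. */
    #include <stdint.h>
    #include <stdbool.h>

    struct ilnp_addr {
        uint64_t locator;  /* names the network attachment; may change */
        uint64_t node_id;  /* names the node; stable across moves */
    };

    struct session {
        uint64_t peer_node_id;  /* transport state binds to the identifier only */
    };

    /* Accept a packet if the identifier matches, whatever locator it came from. */
    bool session_matches(const struct session *s, const struct ilnp_addr *src) {
        return s->peer_node_id == src->node_id;
    }

    /* Handoff: update the peer's locator; the session object is untouched. */
    void handoff(struct ilnp_addr *peer, uint64_t new_locator) {
        peer->locator = new_locator;
    }

    int main(void) {
        struct ilnp_addr peer = { .locator = 0x2001, .node_id = 0xABCD };
        struct session s = { .peer_node_id = peer.node_id };
        handoff(&peer, 0x2002);                     /* peer moves networks */
        return session_matches(&s, &peer) ? 0 : 1;  /* session survives */
    }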

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

 

Meeting ID: 873 7115 8139

Passcode: 212149


Four Years of GUSS: Successes and Lessons Learned (13 June, 2023)

Speaker: Tim Storer

This systems seminar is delivered by Tim Storer. Tim will talk about the unique journey of GUSS. 

Please see the details below:

Title: Four Years of GUSS: Successes and Lessons Learned

Abstract: The School of Computing Science created the Glasgow University Software Service in the summer of 2019. The service began life as a school summer internship, with no other funding, developers or projects. Since then we’ve grown enormously and continue to do so, offering a range of software-based services across the University and beyond. As of May 2023, we have a management team of six, more than 30 active developers and UX designers, and around 15 active projects. I’ll talk about the history of GUSS and how we’ve grown over time, noting the numerous successes as well as the lessons we’ve learned along the way.

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09
Meeting ID: 873 7115 8139
Passcode: 212149


Systems Modelling with Bigraphs (06 June, 2023)

Speaker: Blair Archibald

Abstract: Bigraphs are an expressive modelling formalism, first introduced by Robin Milner, for modelling systems with strong notions of space and mobility. In this talk I will informally (no maths required!) introduce bigraphs, including their diagrammatic notation (really, no maths!), and give a high-level overview of some of the applications we have used them for, some of the research and development undertaken at Glasgow, and some hints as to where we might take bigraphs in the future.

 

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09
Meeting ID: 873 7115 8139
Passcode: 212149


Tracking IoT P2P Botnet Loaders in the Wild (16 May, 2023)

Speaker: Hatem Aied S Almazarqi

Hatem will be presenting a talk remotely on "Tracking IoT P2P Botnet Loaders in the Wild".

This will be a hybrid event hosted at SAWB 422.

Please find the abstract below:

Abstract: Centralised botnets are nowadays considered easy targets for take-down efforts by law enforcement and computer security researchers. Hence, malicious actors have transitioned towards Peer-to-Peer (P2P) IoT botnets so as to solidify their infrastructures, avoid single points of failure, and further evade back-tracking. Consequently, due to the highly distributed nature of modern P2P botnets, detecting the critical nodes needed to effectively capture emerging threat vectors in such setups has evolved into a challenging task. In this work, we conduct a novel 24-month longitudinal study, based on real Internet measurements from globally distributed honeypots, focusing on the propagation trends of P2P IoT botnets. To achieve this, we develop graph-based centrality metrics to attribute AS-level connectivity characteristics to botnet and malware propagation, and to relate AS-level tolerance for the botnet malware hosts we refer to as loaders. Overall, we argue that the proposed methodology and the outcomes of this study can significantly benefit security experts and network operators in designing mitigation measures against present and future P2P botnets.
 
Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09
Meeting ID: 873 7115 8139
Passcode: 212149


The climate cost of the AI revolution (02 May, 2023)

Speaker: Wim Vanderbauwhede

Summary: ChatGPT and other AI applications such as Midjourney have pushed "Artificial Intelligence" high on the hype cycle. In this talk, I focus specifically on the energy cost of training and using applications like ChatGPT, what their widespread adoption could mean for global CO₂ emissions, and what we could do to limit these emissions.

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09
Meeting ID: 873 7115 8139
Passcode: 212149


Cyber Security and Human Reliability Analysis (25 April, 2023)

Speaker: Ying He

Ying is a GLASS alumnus now working at Nottingham. He will join our seminar next week remotely and talk about his recent cyber security work. 

Title: Cyber Security and Human Reliability Analysis

Abstract:

Organisations continue to suffer information security incidents and breaches as a result of human error, even though humans are recognised as the weakest link with regard to information security. Organisations have realised the importance of the human factor but have not implemented a systematic approach such as Human Reliability Analysis (HRA), which is used within high-reliability sectors such as rail, aviation and energy. The objectives of our research are to define a human-error-related information security incident and to create the novel HEART of Information Security (HEART-IS) technique, an adaptation of the Human Error Assessment and Reduction Technique (HEART). We conducted case studies within a private sector and a public sector organisation using HEART-IS to establish whether HRA is applicable to information security. We found that HEART-IS is applicable to the information security field with some minor amendments to the terminology. The mapping of information security incident causes to the HEART Error Producing Conditions (EPCs) was successful, but the in-built HEART human error probability calculations did not match the actual volumes of reported human-error-related incidents.
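For readers unfamiliar with HEART, the quantification step the abstract refers to multiplies a generic task's nominal unreliability by one factor per applicable Error Producing Condition, each factor scaled by the analyst's assessed proportion of affect. A sketch of that standard calculation follows; the numbers are illustrative and are not values from the case studies.

    /* HEART quantification: HEP = nominal unreliability of the generic
     * task, multiplied for each applicable Error Producing Condition
     * (EPC) by ((max_effect - 1) * assessed_proportion + 1).
     * Illustrative numbers, not taken from the case studies. */
    #include <stdio.h>

    struct epc {
        double max_effect;  /* maximum multiplier for this EPC */
        double proportion;  /* analyst-assessed proportion of affect, 0..1 */
    };

    int main(void) {
        double nominal = 0.02;   /* generic task unreliability */
        struct epc epcs[] = {
            { 11.0, 0.4 },       /* e.g. unfamiliarity with the task */
            {  3.0, 0.5 },       /* e.g. time pressure */
        };
        double hep = nominal;
        for (int i = 0; i < 2; i++)
            hep *= (epcs[i].max_effect - 1.0) * epcs[i].proportion + 1.0;
        printf("assessed HEP = %.4f\n", hep);  /* 0.02 * 5.0 * 2.0 = 0.2 */
        return 0;
    }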

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


Systems Section MSci Presentation Part 2 (22 March, 2023)

Speaker: Systems Section MSci Students

This week’s Systems Seminars are special! We have a series of talks from MSci students in the Systems Section, running over two consecutive days.

This is part 2 of the 2 sets of presentations.

Wednesday 22/03/2023, 13:30–15:15

Ethan Ingraham (Supervisor: Yehia Elkhatib)

Borislav Kratchanov (Supervisor: Blair Archibald) —  “Compilation as a Bigraphical Reactive System”

Kshitiz Bisht (Supervisor: Nikos Ntarmos)

Inesh Bose (Supervisor: Tim Storer) — “Understanding Developer Experience & Productivity with a Holistic Dashboard”

 


Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


Systems Section MSci Presentation Part 1 (21 March, 2023)

Speaker: Systems Section MSci Students

This week’s Systems Seminars are special! We have a series of talks from MSci students in the Systems Section, running over two consecutive days.

This is part 1 of the 2 sets of presentations.

Tuesday 21/03/2023, 14:00–15:45

Peter Dodd (Supervisor: Jose Cano Reyes)

Mairi Sillars Moya (Supervisor: Colin Perkins) — “Parsing State Machine Models from Standards Documents”

Elizabeth Boswell (Supervisor: Colin Perkins) — “Analysing NAT64 Characteristics with RIPE Atlas”

Jude Campbell (Supervisor: Dimitris Pezaros)

Jack Spreckley (Supervisor: Lito Michala)


Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


A Middleware for Automatic Composition and Mediation in IoT Systems (14 March, 2023)

Speaker: Yehia Elkhatib

Abstract: In this talk, I will present Hetero-Genius, a middleware architecture that enables automatic composition and mediation in Internet of Things (IoT) systems. IoT systems are deployed across physical spaces such as urban parks, residential areas, and highways. The services provided by such IoT deployments are constrained to specific devices and deployment contexts. While existing interoperability solutions enable the “design time” development and deployment of IoT systems, it is often essential to dynamically compose systems that consist of other “small scale” IoT systems. To achieve this, post-deployment composition is needed, i.e., runtime composition of diverse IoT devices and capabilities. Hetero-Genius supports system and service discoverability, as well as automatic composability. We demonstrate this using a real-world Internet of Vehicles (IoV) scenario, where developers can save up to 47% of their time when using Hetero-Genius, as well as improving code correctness by 55% on average.

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


Systems Seminar with external speaker — Roland Bless (28 February, 2023)

Speaker: Dr Roland Bless

Roland is an associate professor at KIT <https://telematics.tm.kit.edu/english/staff_bless.php>. He will attend in person next week.

Roland will be delivering two talks. Please find the titles and abstracts below:

1) "KIRA – Scalable Routing for Autonomous Control Planes"
==========================================================
This talk presents KIRA, a new, highly scalable routing architecture that was specifically designed for use in (autonomous) control planes (in-band and/or out-of-band). It aims to provide highly robust connectivity; it is ID-based, zero-touch, and scales to large networks consisting of hundreds of thousands of nodes. In contrast to traditional data plane routing protocols, it prioritizes connectivity over route efficiency. It performs well in various topologies (sparse, dense, random, regular, etc.), supports link weights and multi-path routing, and shows fast convergence even in drastic failure scenarios. Moreover, it is loop-free and free of route flapping, and uses an approach similar to label switching to forward control plane packets efficiently. Furthermore, it optionally provides a built-in DHT (key-value store) and a highly efficient topology discovery mechanism.

2) "On Rate-based vs. Window-based Congestion Controls"
=======================================================
This talk discusses the fundamental difference between rate-based and window-based congestion controls. The recent note on "Deprecating The TCP Macroscopic Model" (https://dl.acm.org/doi/10.1145/3371934.3371956) has a point that the ACK clocking mechanism is often distorted and unreliable, but goes too far in suggesting that congestion-window-based approaches should no longer be considered useful ("we see that the era of TCP dynamics built upon self-clocked, window-based congestion control is coming to a close."). This talk tries to shed light on the questions "What are the effects of the sheer existence of a congestion window?" and "What are the pitfalls of rate-based and congestion-window-based approaches?" A useful direction seems to be a careful combination of both methods.

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


A novel threat modelling technique for GDPR-compliance (21 February, 2023)

Speaker: Nguyen Truong

Abstract:

In recent years, data-driven applications have increasingly been deployed in all aspects of life, including smart homes, smart cities, healthcare, and medical services. In such applications, Artificial Intelligence (AI) is employed extensively: personal data is collected and aggregated from heterogeneous sources before being processed using "black-box" algorithms in opaque centralised servers. As a consequence, preserving the data privacy and security of these applications is of paramount importance. Since May 2018, the new data protection legislation in EU member states and the UK, namely the General Data Protection Regulation (GDPR), has been in force. The GDPR establishes heavy punishment for non-compliance: failing to comply can be penalised by financial fines as well as reprimand, ban, or suspension of the violator's business. This has created a critical need for any application or service processing personal data to have modelling tools for analysing data privacy threats and compliance with the GDPR's sophisticated requirements.

 

Modelling techniques for detecting potential threats and specifying countermeasures to mitigate vulnerabilities play a significant role in securing personal data against data breaches and privacy attacks. Numerous threat modelling techniques have been proposed in the literature, such as STRIDE, PASTA, and LINDDUN, but they focus only on modelling security threats (STRIDE, PASTA) or privacy threats derived from security threats (LINDDUN). None of them is sufficient to model privacy threats, and GDPR compliance in particular, in complex systems and applications such as autonomous systems and healthcare services, in which a large amount of heterogeneous personal data is collected from various sources and then processed, manipulated, and shared with numerous third parties. Our research aims to develop a novel threat modelling technique that takes GDPR requirements as the baseline, addressing compliance risks for telehealth systems and, as a result, mitigating data privacy threats in the system. For this purpose, we have been developing the proposed GDPR-compliance threat modelling technique through the following components:

 

  1. New System Modelling: An autonomous system would be modelled utilising STRIDE Data Flow Diagram integrated with new concepts defined in the GDPR requirements.
  2. A knowledge base for GDPR-compliance threats: We are building a knowledge base for the compliance threats and integrating it with the existing knowledge base of the security and privacy threats in STRIDE.
  3. Inference Engines for GDPR-compliance: We aim to develop a novel inference algorithm for reasoning about GDPR-compliance threats, based on the knowledge base developed in item 2. This inference engine can then be integrated with the Microsoft STRIDE modelling tool for a better user interface and convenience.

 

We have demonstrated the idea of the proposed technique using Microsoft's existing STRIDE tool in the scenario of telehealth systems, which has shown the feasibility and promise of this novel threat modelling technique. Further research to strengthen the proposal will follow.

 

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


Leveraging Low-Carbon Energy for Flexible Compute Workloads in Cloud and Edge Environments (14 February, 2023)

Speaker: Lauritz Thamsen

The growing energy demand for cloud and edge computing stands to have a considerable impact on the environment. Data centres already consume more than 1% of globally produced energy, and this share is expected to rise considerably over the next few decades as even more computing is performed on cloud infrastructure. If nothing changes, this will result in large amounts of additional carbon emissions.

One approach to this problem is to take carbon emissions into account when scheduling and placing compute workloads, since the carbon intensity of power grids is often not constant, and some computational resources are also connected to renewable energy sources. Therefore, delay-tolerant workloads can be scheduled to cloud and edge resources based on the availability of low-carbon energy.

In this talk, I will summarize our recent work towards carbon-aware scheduling for cloud and edge workloads, looking at infrastructure equipped with grid energy or renewable energy sources. Our results indicate a significant potential to reduce emissions for large flexible compute workloads, such as large-scale batch processing and machine learning training, as long as there is flexibility as to when results are needed.

For instance, shifting workloads whose results are not needed before the next workday can already reduce emissions from grid energy by over 5%, while scheduling workloads over multiple days can lower emissions by around 20%. Moreover, load and energy forecasts can be used to run flexible workloads exclusively on renewable excess energy and spare compute capacity, driving down the operational carbon emissions of workloads substantially.
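In its simplest form, the scheduling decision behind these numbers can be sketched directly: given an hourly carbon-intensity forecast and a job of d hours that must finish within the forecast horizon, start it in the window with the lowest mean forecast intensity. A minimal sketch under those assumptions, with made-up forecast values:

    /* Pick the start hour that minimises mean forecast carbon intensity
     * over a d-hour job, subject to finishing within the horizon.
     * Forecast values are made up for illustration. */
    #include <stdio.h>

    int best_start(const double *gco2_per_kwh, int horizon, int d) {
        int best = 0;
        double best_mean = 1e30;
        for (int s = 0; s + d <= horizon; s++) {
            double sum = 0.0;
            for (int h = s; h < s + d; h++)
                sum += gco2_per_kwh[h];
            if (sum / d < best_mean) { best_mean = sum / d; best = s; }
        }
        return best;
    }

    int main(void) {
        /* 12-hour forecast: intensity drops when wind picks up overnight. */
        double forecast[12] = { 420, 400, 380, 300, 220, 180,
                                170, 200, 260, 340, 390, 410 };
        int d = 3;  /* job length in hours */
        printf("start at hour %d\n", best_start(forecast, 12, d));  /* prints 5 */
        return 0;
    }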

This seminar will be in person in SAWB 422 and available on Zoom.

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


First Come First Served: The Impact of File Position on Code Review (07 February, 2023)

Speaker: Gül Calikli

The most popular code review tools (e.g., Gerrit and GitHub) present the files to review sorted in alphabetical order. Could this choice or, more generally, the relative position in which a file is presented bias the outcome of code reviews? We investigate this hypothesis by triangulating complementary evidence in a two-step study. First, we observe developers’ code review activity. We analyse the review comments pertaining to 219,476 Pull Requests (PRs) from 138 popular Java projects on GitHub. We found files shown earlier in a PR to receive more comments than files shown later, also when controlling for possible confounding factors: e.g., the presence of discussion threads or the lines added in a file. Second, we measure the impact of file position on defect finding in code review. Recruiting 106 participants, we conduct an online controlled experiment in which we measure participants’ performance in detecting two unrelated defects seeded into two different files. Participants are assigned to one of two treatments in which the position of the defective files is switched. For one type of defect, participants are not affected by its file’s position; for the other, they have 64% lower odds to identify it when its file is last as opposed to first. Overall, our findings provide evidence that the relative position in which files are presented has an impact on code reviews’ outcome; we discuss these results and implications for tool design and code review.

This seminar will be available online and in person. 

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


Inferring Human Emotions from Robot's Motion in the Tele-Operation Systems (24 January, 2023)

Speaker: Emma Li

Tele-operation systems allow human operators to control a remote robot to interact with the local environment and execute tasks. Such systems can be applied in many fields, such as healthcare, surgery, education, and entertainment. Detecting the operator's emotions becomes crucial when the system is used to conduct mission-critical tasks, such as tele-surgery, nuclear waste cleaning and remote driving. For example, when a user controls a remote robot to perform surgery under intense or extreme emotion, the user may perform an imprecise operation and cause serious injuries to a patient. In a remote driving scenario, the driver can be warned when stressed or tired emotional states are detected. In addition, when emotions are detected during operation, special AI methods could be applied to assist the robot control for better performance. In this talk, I will introduce our recent work on inferring human emotions using data collected from our motion-controlled robot arm testbed. 

Zoom Link: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149

 


Design and evaluation of IPFS: a storage layer for the decentralized web (17 January, 2023)

Speaker: Dr Ignacio Castro

Recent years have witnessed growing consolidation of web operations. For example, the majority of web traffic now originates from a few organizations, and even micro-websites often choose to host on large pre-existing cloud infrastructures. In response to this, the "Decentralized Web" attempts to distribute ownership and operation of web services more evenly. This paper describes the design and implementation of the largest and most widely used Decentralized Web platform, the InterPlanetary File System (IPFS): an open-source, content-addressable peer-to-peer network that provides distributed data storage and delivery. IPFS has millions of daily content retrievals and already underpins dozens of third-party applications. This paper evaluates the performance of IPFS by introducing a set of measurement methodologies that allow us to uncover the characteristics of peers in the IPFS network. We reveal presence in more than 2700 Autonomous Systems and 152 countries, the majority of which operate outside large central cloud providers like Amazon or Azure. We further evaluate IPFS performance, showing that both publication and retrieval delays are acceptable for a wide range of use cases. Finally, we share our datasets, experiences and lessons learned.
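The property doing much of the work here is content addressing: an object's name is derived from a hash of its bytes, so identical content has the same address wherever it is stored, and a fetcher can verify what it received by re-hashing. The toy sketch below uses FNV-1a purely to keep the example self-contained; IPFS itself uses multihash-based CIDs (typically SHA-256).

    /* Toy content addressing: name = hash(bytes). IPFS really uses
     * multihash-based CIDs; FNV-1a stands in here only so the sketch
     * needs no crypto library. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t fnv1a(const unsigned char *data, size_t len) {
        uint64_t h = 14695981039346656037ULL;  /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= data[i];
            h *= 1099511628211ULL;             /* FNV prime */
        }
        return h;
    }

    int main(void) {
        const char *blob = "hello, decentralized web";
        uint64_t addr = fnv1a((const unsigned char *)blob, strlen(blob));
        /* Any peer holding these bytes serves them under the same address,
         * and a fetcher re-hashes the bytes to verify integrity. */
        printf("content address: %016llx\n", (unsigned long long)addr);
        return 0;
    }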

 

Bio: Ignacio Castro is Lecturer in Data Analytics at Queen Mary University of London. He obtained his PhD while researching at the Institute IMDEA Networks (Madrid, Spain), and visiting UC Berkeley (California, USA). His work sits at the intersection between economics and computer systems. His interest spans from online social networks and moderation to the macroscopic evolution of the Internet. He has been an investigator on three major EPSRC grants that hold over £6 million in funding. His work appears in top tier journals and conferences including Web Conference, ACM SIGCOMM, ACM SIGMETRICS, ACM IMC, ICWSM, and IEEE/ACM Trans. on Networking. He also serves in multiple TPCs and organises top tier conferences including IMC, CoNEXT, and SIGCOMM.

 

This seminar will also be available on Zoom: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


Flocking to Mastodon: Tracking the Great Twitter Migration (17 January, 2023)

Speaker: Dr Gareth Tyson

On October 27, 2022, Elon Musk acquired the world's largest micro-blogging platform, Twitter. Given Musk's self-proclaimed stance as a "free speech absolutist", this was a controversial and highly publicised event. The acquisition led to a series of chaotic events, and as a consequence Twitter experienced a mass migration of users. One of the recipient platforms has been Mastodon, a decentralized microblogging service. This presentation will discuss our measurements of the migration.

Bio: Gareth Tyson is an Assistant Professor at Hong Kong University of Science and Technology, and a Senior Lecturer at Queen Mary University of London. He regularly publishes in venues such as SIGCOMM, SIGMETRICS, WWW, INFOCOM, CoNEXT and IMC, alongside various top-tier IEEE/ACM Transactions. Over the last 5 years, he has been awarded over £5 million in research funding and has received coverage from news outlets such as BBC, Washington Post, CNBC, New Scientist, MIT Tech Review, The Times, Slashdot, Daily Mail, Wired, Science Daily, Ars Technica, The Independent, Business Insider and The Register, as well as being interviewed on both TV and radio. He regularly serves as an organising and program committee member for conferences such as ACM SIGCOMM, ACM SIGMETRICS, ACM IMC, ACM WWW, ACM CoNEXT, IEEE ICDCS and AAAI ICWSM.

Available in SAWB 422 and online: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


Inferring Human Emotions from Robot's Motion in the Tele-Operation Systems (13 December, 2022)

Speaker: Emma Li

Tele-operation systems allow human operators to control a remote robot to interact with the local environment and execute tasks. Such systems can be applied in many fields, such as healthcare, surgery, education, and entertainment. Detecting the operator's emotions becomes crucial when the system is used to conduct mission-critical tasks, such as tele-surgery, nuclear waste cleaning and remote driving. For example, when a user controls a remote robot to perform surgery under intense or extreme emotion, the user may perform an imprecise operation and cause serious injuries to a patient. In a remote driving scenario, the driver can be warned when stressed or tired emotional states are detected. In addition, when emotions are detected during operation, special AI methods could be applied to assist the robot control for better performance. In this talk, I will introduce our recent work on inferring human emotions using data collected from our motion-controlled robot arm testbed.

 

https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

Meeting ID: 873 7115 8139

Passcode: 212149


Transfer-Tuning: Reusing Auto-Schedules for Efficient Tensor Program Code Generation (29 November, 2022)

Speaker: Perry Gibson

Auto-scheduling for tensor programs is a process in which a search algorithm automatically explores candidate schedules (program transformations) for a given program on a target hardware platform to improve its performance. However, this can be a very time-consuming process, depending on the complexity of the tensor program and the capacity of the target device, with often many thousands of program variants being explored. To address this, in this paper we introduce the idea of transfer-tuning [1], a novel approach to identify and reuse auto-schedules between tensor programs. We demonstrate this concept using Deep Neural Networks (DNNs), taking sets of auto-schedules from pre-tuned DNNs and using them to reduce the inference time of a new DNN. We compare transfer-tuning against the state-of-the-art Ansor auto-scheduler, defining the maximum possible speedup for a given DNN model as what Ansor achieves using its recommended full tuning time. On a server-class CPU and across 11 widely used DNN models, we observe that transfer-tuning achieves up to 88.41% (49.13% on average) of this maximum speedup, while Ansor requires 6.5× more search time on average to match it. We also evaluate transfer-tuning on a constrained edge CPU and observe that the differences in search time are exacerbated, with Ansor requiring 10.8× more time on average to match transfer-tuning's speedup, which further demonstrates its value. Our code is available at https://www.github.com/gicLAB/transfer-tuning

[1] https://arxiv.org/abs/2201.05587

This talk will also be available on Zoom for those unable to attend in person. 

https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


A Slice-Based Decentralized NFV Framework for an End-to-End QoS-Based Dynamic Resource Allocation (29 November, 2022)

Speaker: Inès Djouela

The network function virtualization (NFV) concept has recently emerged to solve network operators' and service providers' problems related to the inflexibility of traditional networks and the increase in capital and operational expenditures (CAPEX and OPEX). ETSI has standardized an architectural framework to serve as a springboard for applying and deploying the concept. However, that architecture presents many shortcomings, and several challenges have to be addressed. This talk focuses on two of those challenges, namely orchestration and resource allocation. To handle these challenges, we present a new framework built on the standardized ETSI architecture. The proposed framework aims to satisfy the subscriber's Service Level Agreement and to optimize Telecom Service Provider fees under a pay-as-you-go model. We highlight the three main steps of the NFV resource allocation process, namely chaining, mapping and scheduling.

This talk will also be available on Zoom for those unable to attend in person. 

https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


Applying new technologies to UK MOD vehicles with UK GVA (22 November, 2022)

Speaker: Ian James

Ian James and Natasha Dell will be coming into the School from Thales to present on 'Applying new technologies to UK MOD vehicles with UK GVA'. 

Thales are involved in both the standards and the latest technologies considered for UK MOD vehicle systems, primarily based on the UK Generic Vehicle Architecture (GVA). The UK GVA provides an electronic architecture platform which can be used to allow the inclusion of a capability on a vehicle. This presentation looks at next-generation capabilities, such as improved situational awareness in an urban canyon, and how an additional "Digital Crew" member might be able to help with workload. It looks at a Thales demonstration system built to confirm the feasibility of these new capabilities. There is also a summary look at the latest technology options being considered now for inclusion in the next applied research programmes.

Keywords: embedded networks, open systems, AI, machine learning, image processing, optronics, unmanned and manned systems, pattern of life, data fusion, safe networked capability.

Ian and Natasha will have some time available after 3pm for additional discussion and questions.  

 

This seminar will also be available on Zoom: https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


The Remarkable Longevity of the Richards Benchmark (15 November, 2022)

Speaker: Jeremy Singer

The Richards benchmark was originally developed as a low-level, portable system performance measurement tool in 1980, yet it remains in common use today and has been translated into a wide range of programming languages. In this talk, we explore the nature of the Richards benchmark and discuss how its properties contribute to its longevity, which may provide useful insights for contemporary benchmark designers.

 

This seminar will also be available on Zoom:

https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


TruSDEd: Composable, Efficient, Secure XDP Service Function Chaining on Single-Board Computers (08 November, 2022)

Speaker: Kyle Simpson

IoT and sensor networks are commonplace and are often used to monitor public infrastructure, but have earned something of a reputation for being insecure. Individual devices have questionable provenance, security fixes and software updates are not always guaranteed, and ‘by-design’ security of older deployments can be an afterthought.

The TruSDEd project aims to use cheap, commodity single-board compute (SBC) devices—like Raspberry Pis and Intel NUCs—for drop-in, reconfigurable defence via ingress/egress network traffic processing. The low traffic rates of these edge networks mean these devices are well-suited to the job in theory, but state-of-the-art service chaining frameworks rely on hardware features only found in server-grade hardware. New Linux kernel features like XDP and AF_XDP—combined with memory-safe languages like Rust—offer the tools to solve this mismatch, and this talk focusses on the (ongoing) design and implementation of an efficient, secure XDP-based software dataplane built to make best use of SBC devices.
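For a flavour of what one link in such an XDP service chain looks like, here is a minimal kernel-side program that parses the Ethernet header and drops IPv6 frames while passing everything else up the stack. The filtering policy is invented for illustration and is not TruSDEd's dataplane; the program compiles with clang's BPF target against libbpf headers.

    /* Minimal XDP program: drop IPv6 frames, pass everything else.
     * Compile with: clang -O2 -g -target bpf -c xdp_drop6.c -o xdp_drop6.o
     * The policy is an invented example, not TruSDEd's dataplane. */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int drop_ipv6(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;

        /* The verifier requires an explicit bounds check before access. */
        if ((void *)(eth + 1) > data_end)
            return XDP_ABORTED;

        if (eth->h_proto == bpf_htons(ETH_P_IPV6))
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";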

This talk will also be available on Zoom for those unable to attend in person. 

 

https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09

 


The Web We Weave: Untangling the Social Graph of the IETF (25 October, 2022)

Speaker: Stephen McQuistin

Stephen will be presenting in SAWB 422 on 'The Web We Weave: Untangling the Social Graph of the IETF'.

Abstract: Protocol standards, defined by the Internet Engineering Task Force (IETF), are crucial to the successful operation of the Internet. In this talk, I’ll describe a large-scale empirical study of IETF activities, with a focus on understanding collaborative activities, and how these underpin the publication of standards documents (RFCs). Using a unique dataset of 2.4 million emails, 8,711 RFCs and 4,512 authors, I’ll show the shifts and trends within the standards development process, and how protocol complexity and the time to produce standards have increased. 

This talk will also be available on Zoom for those unable to attend in person. 

https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


Classifying the Reliability of the Microservices Architecture (18 October, 2022)

Speaker: Adrian Ramsingh

Microservices are popular for web applications as they offer better scalability and reliability than monolithic architectures. Reliability is improved by loose coupling between individual microservices. However, in production systems, some microservices are tightly coupled or chained together.

 

We classify the reliability of microservices: if a minor microservice fails then the application continues to operate; if a critical microservice fails, the entire application fails. Combining reliability (minor/critical) with the established classifications of dependence (individual/chained) and state (stateful/stateless) defines a new three-dimensional space: the Microservices Dependency State Reliability (MDSR) classification. 

 

Using three web application case studies (Hipster-Shop, Jupyter and WordPress) we identify microservice instances that exemplify the six points in MDSR. We present a prototype static analyser that can identify all six classes in Flask web applications and apply it to seven applications. We explore case study examples that exhibit either a known reliability pattern or a bad smell. 

 

We show that our prototype static analyser can identify three of six patterns/bad smells in Flask web applications. Hence MDSR provides a structured classification of microservice software with the potential to improve reliability. Finally, we evaluate the reliability implications of the different MDSR classes by running the case study applications against a fault injector.

Adrian will be presenting in SAWB 422, but you can also join via Zoom:  

https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


Systems Seminar (11 October, 2022)

Speaker: Dimitrios Pezaros

An overview of and update on the Section, a welcome to new members, and, if time allows, some research talk too.

It will be held in SAWB 422 and online.

https://uofglasgow.zoom.us/j/87371158139?pwd=N3Y4OHZpblp1OFYzNGFOc25kT0NvQT09


SECDA: Efficient Hardware/Software Co-Design of FPGA-based DNN Accelerators for Edge Inference (21 June, 2022)

Speaker: Jude Haris

Edge computing devices inherently face tight resource constraints, which is especially apparent when deploying Deep Neural Networks (DNNs) with high memory and compute demands. FPGAs are commonly available in edge devices. Since these reconfigurable circuits can achieve higher throughput and lower power consumption than general-purpose processors, they are especially well-suited for DNN acceleration. However, existing solutions for designing FPGA-based DNN accelerators for edge devices come with high development overheads, given the cost of repeated FPGA synthesis passes, reimplementation of the simulated design in a Hardware Description Language (HDL), and accelerator system integration.

During the presentation, we discuss SECDA, a new hardware/software co-design methodology to reduce the design time of optimized DNN inference accelerators on edge devices with FPGAs. SECDA combines cost-effective SystemC simulation with hardware execution, streamlining design space exploration and the development process via reduced design evaluation time. As a case study, we use SECDA to efficiently develop two different DNN accelerator designs on a PYNQ-Z1 board, a platform that includes an edge FPGA. This work has been published at SBAC-PAD 2021.


Dependability and Data Analytics: A Match made in the Cloud (20 June, 2022)

Speaker: Saurabh Bagchi

Abstract:

We live in a data-driven world as everyone around has been telling us for some time. Everything is generating data, in volumes and at high rates, from the sensors embedded in our physical spaces to the large number of machines in data centers which are being monitored for a wide variety of metrics. The question that we pose is: Can all this data be used for improving the dependability of cloud computing systems?

Dependability is the property that a computing system continues to provide its functionality despite the introduction of faults, either accidental faults (design defects, environmental effects, etc.) or maliciously introduced faults (security attacks, external or internal). We have been addressing the dependability challenge through large-scale data analytics applied end-to-end, from the small (networked embedded systems, mobile and wearable devices) [e.g., CVPR-22, Eurosys-22, NeurIPS-20, Sensys-20, UsenixSec-20, NDSS-20] to the large (edge and cloud systems, distributed machine learning clusters) [e.g., OSDI-22, Sigmetrics-22, UsenixATC-21, DSN-20, UsenixATC-20]. In this talk, I will first give a high-level view of how data analytics has been brought to bear on dependability challenges, and key insights arising from work done by the technical community broadly. Then I will do a deep dive into the problem of configuring complex cloud systems to meet dependability and performance requirements, using data-driven decisions.

For the detailed part, I will show how distributed applications on the cloud can be configured for dependability and predictable performance even as workloads change unpredictably. I will then discuss an exciting and emerging area of cloud computing, serverless applications on the cloud, and show how they can be configured for dependability and performance determinism.

Speaker:

Saurabh Bagchi is a Professor in the School of Electrical and Computer Engineering and the Department of Computer Science at Purdue University in West Lafayette, Indiana, USA. He is the founding Director of a university-wide resiliency center at Purdue called CRISP (2017-present) and leads the Army’s Assured Autonomous Innovation Institute (A2I2) at Purdue. He is a Fellow of the Institute of Engineering and Technology (IET) and the recipient of the Alexander von Humboldt Research Award (2018), an Adobe Research Award (2021, 2017), the AT&T Labs VURI Award (2016), the Google Faculty Award (2015), and the IBM Faculty Award (2014). He serves on the IEEE Computer Society Board of Governors and is a selected member of the International Federation for Information Processing (IFIP). Saurabh's research interest is in distributed systems and dependable computing. He is proudest of the 23 PhD and about 50 Masters thesis students who have graduated from his research group and who are in various stages of building wonderful careers in industry or academia. In his group, he and his students have far too much fun building and breaking real systems for the greater good. Saurabh received his MS and PhD degrees from the University of Illinois at Urbana-Champaign and his BS degree from the Indian Institute of Technology Kharagpur, all in Computer Science. He is the co-founder and CTO of a cloud computing startup, KeyByte.


Continuous Performance Testing in Virtual Time (14 June, 2022)

Speaker: Robert Chatley

Abstract:
Test-Driven Development (TDD) can give us a lot of information about functional correctness of a software system, but the way it is generally used currently cannot give much information on the performance characteristics of the implemented system. We typically do not find out about performance problems until the whole system is tested together, or worse, when it fails in production. In this talk we introduce new techniques for constructing unit tests that allow us to explore performance characteristics and detect problems before deploying our software. We can use virtual time to run performance experiments without waiting for real time to elapse, so we can get the fast feedback we are used to from the TDD cycle.
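One way to picture "virtual time" is an event loop whose clock is a plain variable: the test harness jumps the clock to each scheduled event rather than sleeping, so performance scenarios spanning minutes execute in microseconds. A minimal sketch of that idea (an illustration only, not the framework described in the talk):

    /* Minimal virtual-time event loop: the test advances a simulated
     * clock to each pending event instead of sleeping, so a "minute"
     * of load runs in microseconds. Illustrative sketch only. */
    #include <stdio.h>

    struct event { double at_ms; const char *name; };

    static double now_ms = 0.0;  /* virtual clock, never wall-clock */

    static void run(const struct event *evs, int n) {
        /* Events are assumed sorted by time, for brevity. */
        for (int i = 0; i < n; i++) {
            now_ms = evs[i].at_ms;  /* jump, don't sleep */
            printf("t=%8.1fms  %s\n", now_ms, evs[i].name);
        }
    }

    int main(void) {
        struct event evs[] = {
            {     0.0, "request issued" },
            {   150.0, "cache miss"     },
            { 60000.0, "timeout fires"  },
        };
        run(evs, 3);
        /* A unit test can now assert on latencies derived from now_ms. */
        return 0;
    }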
 
Speaker:
Dr Robert Chatley - Director of Software Engineering Practice, Imperial College London.  
 
After completing his PhD in software engineering, Robert spent many years working in industry as a senior engineer and a consultant before returning to university life. His work now bridges industry and academia, focussing on developing skills and knowledge in software engineers to build technical competence and improve developer productivity. His role at Imperial combines a strong focus on education with industry-focussed research. He is also director of the EdTech Lab for the Department of Computing at Imperial, leading an internal open-source community developing software supporting learning and teaching.

 


Machine Learning on the Edge - Challenges, Opportunities and Solutions (07 June, 2022)

Speaker: Blesson Varghese

Abstract

Machine learning (ML) on the edge is premised on (pre)processing data nearer to the source where it is generated rather than heavily relying on distant hyperclouds. This premise underpins many high-value and emerging distributed applications of scientific and societal relevance. However, many existing ML algorithms are not designed to run in relatively resource constrained environments. This talk will present the opportunities in offloading within the context of ML to alleviate the computational burden on frugal resources by leveraging the edge. Using federated learning as an example, the talk will highlight how the challenges of stragglers due to computational heterogeneity and of adapting to changing operational conditions are addressed.

Speaker

Blesson Varghese is a Reader in Computer Science at the University of St Andrews and directs the Edge Computing Hub (https://edgehub.co.uk/). He was the recipient of the 2021 IEEE Technical Committee on the Internet Rising Star Award for fundamental contributions to edge computing and held a Royal Society industry fellowship for exploring trust at the edge with UK’s largest telecoms and network providers, BT. His current interests are at the intersection of distributed systems and machine learning. More information is available at https://www.blessonv.com.


Wiring Circuits is easy as 0-1-Omega, or is it... (31 May, 2022)

Speaker: Jan De Muijnck-Hughes

Quantitative types allow us to reason more precisely about term usage in our programming languages. There are, however, other styles of language in which quantitative typing is beneficial: Hardware Description Languages (HDLs), for one. When wiring components together it is important to ensure that there are no unused ports or dangling wires. Here the notion of usage is different to that found in general-purpose languages. Although many linearity checks are detectable using static analysis tools such as Verilator, it is really interesting to investigate how we can use fancy types (specifically quantitative type theory and dependent types) to make wire usage an intrinsic check within the type system itself. With these fancy types we can provide compile-time checks that all wires and ports have been used.

Past work (unpublished) has seen me develop a novel orchestration language that uses fancy types to reason about module orchestration. Today, however, I want to talk about my work in retrofitting a fancy type system on top of an existing HDL. Specifically, I have concentrated my efforts at the 'bottom' of the synthesis chain of SystemVerilog, typing its netlists, a format from which hardware can be generated (fabless and fabbed). From this foundation, future work will be to promote my fancy types up the synthesis chain to a more comprehensive version of SystemVerilog, to new and better HDLs, or to similar application domains such as algebraic circuits in zero-knowledge proofs.

I will begin by introducing an unrestricted simply-typed netlist language and the design issues faced when capturing SystemVerilog's interesting design choices. I will then describe how we can formally attest to the type safety of our type system. Lastly, I will detail how we can retrofit a linearly-wired type system on top of the same syntax.


Anomaly Diagnosis in Cyber-Physical Systems (24 May, 2022)

Speaker: Marco Cook

Programmable Logic Controllers (PLCs) play a vital role in controlling cyber-physical system (CPS) processes and consequently have become a primary target for cyber attacks that aim to disrupt CPS. In contrast to conventional networked setups, the operational and safety-critical importance of PLCs introduces challenges for CNI operators in empirically determining whether an incident is a cyber-attack or a system fault, as both occurrences can produce similar effects on the physical process. In this talk, I will present a novel framework for PLC anomaly diagnosis, defined by a two-stage identification and classification approach based on novelty detection, which we have evaluated using a representative ICS water treatment testbed.


Accelerating Quantum Circuits on FPGAs (17 May, 2022)

Speaker: Youssef Moawad

We are well into the Noisy Intermediate Scale Quantum (NISQ) era of quantum computing. With the emergence of several quantum computers with access to many tens of qubits, and hundreds of qubits being the aim of the next decade or so, the demand for the development of algorithms which can take advantage of this technology has never been higher. To facilitate this in the near term, efficient simulators for quantum hardware have to be developed. The Quantum Circuit Model is the most used computational model for interacting with current quantum hardware. Most simulators currently developed for simulating quantum circuits target GPUs, typically on multi-node systems with access to a large amount of distributed memory. While such systems are powerful enough to minimise the execution times of the circuits, today it is also important to consider their energy consumption. In this talk, a baseline FPGA-based architecture for the simulation of such circuits is presented, and we demonstrate that, while circuits take on average twice as long to finish, they do so with far lower power consumption, resulting in a 3x improvement in performance per watt. A Haskell-based eDSL accompanies the architecture to facilitate the encoding and compilation of quantum circuits for the architecture. Future development directions for the FPGA architecture are mapped out with the aim of reaching a 10x improvement in performance per watt.
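For context, the following minimal state-vector sketch shows the kind of computation a quantum-circuit simulator performs; the talk's contribution is an FPGA architecture and eDSL, not this Python.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_1q(state, gate, target, n):
    # View the 2^n amplitudes as an n-dimensional tensor, one axis per qubit.
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)           # restore qubit ordering
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # |000>
for q in range(n):
    state = apply_1q(state, H, q, n)            # uniform superposition
print(np.round(np.abs(state) ** 2, 3))          # each outcome has p = 1/8
```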


Reflective Metaprogramming for In-Network Computation (10 May, 2022)

Speaker: Charles Varley

As network complexity and dynamic reconfigurability become more prevalent, there is increasing scope for in-network computation. Effective orchestration of such compute tasks will require rich introspective capabilities, facilitating runtime reflection on the dynamic structure and behaviour of the overall system. This project involves an exploration of appropriate features required in a network metaprogramming framework, informed by realistic industrial use cases and more speculative indications of future trends for in-network compute.


Exploiting the Spatio-Temporal Availability of Renewable Energy at the Edge (03 May, 2022)

Speaker: Philipp Wiesner

The growing electricity demand of computing increases operational costs and will soon have a considerable impact on the environment. However, especially at the edge of the network, devices are often located close to renewable energy sources, like rooftop solar arrays in urban environments. This talk will address the current state and planned future work of my research on renewable-aware computing, particularly in the context of distributed and heterogeneous edge environments. The goal is to reduce the carbon footprint of flexible edge workloads, such as federated learning training, by scheduling them at times and locations where we expect renewable excess energy to be available. To this end, we (i) combine forecasts of power generation and consumption, as well as computational load, and (ii) use mathematical optimization to determine a low-carbon scheduling plan.
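A hedged sketch of the core scheduling idea, with made-up forecast numbers: given expected renewable excess energy per site and hour, place a deferrable two-hour workload in the window with the highest expected excess.

```python
forecast = {  # kWh of expected renewable excess per hour, per edge site
    "site_a": [0.1, 0.4, 0.9, 1.2, 0.8],
    "site_b": [0.6, 0.7, 0.5, 0.2, 0.1],
}
runtime_h = 2  # the workload needs a contiguous 2-hour window

best = max(
    ((site, t, sum(hours[t:t + runtime_h]))
     for site, hours in forecast.items()
     for t in range(len(hours) - runtime_h + 1)),
    key=lambda x: x[2],
)
print(f"run at {best[0]}, hour {best[1]}: expected excess {best[2]:.1f} kWh")
```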


Does TCP’s New Congestion Window Validation Improve HTTP Adaptive Streaming Performance? (26 April, 2022)

Speaker: Mihail Yanev

HTTP adaptive streaming video flows exhibit on-off behaviour, with frequent idle periods, which can interact poorly with TCP’s congestion control algorithms. New congestion window validation (CWV) modifies TCP to allow senders to restart more quickly after certain idle periods. While previous work has shown that New CWV can improve transport performance for streaming video, it remained to be shown that this translates into improved application performance. In this talk, I will show that enabling New CWV also improves application performance.


Optimising Task Allocation for Edge Computing Micro-Clusters (15 March, 2022)

Speaker: Yousef Alhaizaey

Optimised task allocation is essential for efficient and effective edge computing; however, task allocation differs in edge computing systems compared to the powerful centralized cloud data centres, given the limited resource capacities in edge computing systems and the strict QoS requirements of many Internet of Things (IoT) applications. In this paper, we extend our previous work on optimising task allocation for edge computing microclusters by incorporating the metaheuristic Particle Swarm Optimization (PSO) to minimize the total execution time of relevant edge workloads. We present an evaluation of the metaheuristic PSO, mixed-integer programming and randomised allocation, based on the computation overhead time and the quality of the produced solutions. Our results demonstrate a clear crossover point, implying that mixed-integer programming is efficient for small-scale clusters, whereas PSO scales better and provides reasonable solutions for larger-scale edge clusters.
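For illustration, a minimal PSO loop is sketched below with a stand-in quadratic cost function; the paper's actual allocation encoding, constraints and cost model are more involved.

```python
import random

def cost(x):                       # hypothetical execution-time surrogate
    return sum((xi - 3) ** 2 for xi in x)

DIM, N, ITERS, W, C1, C2 = 4, 20, 50, 0.7, 1.5, 1.5
pos = [[random.uniform(0, 10) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]        # per-particle best positions
gbest = min(pbest, key=cost)       # global best position

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity blends inertia, personal attraction, global attraction.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)
print(round(cost(gbest), 4))       # approaches 0 as the swarm converges
```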


Classifying the Reliability of Microservices Architectures (08 March, 2022)

Speaker: Adrian Ramsingh

Abstract
Microservices are popular as they offer better scalability and reliability than monolithic software architectures. Reliability is improved by the loose coupling between individual microservices. However, in production systems, some microservices are tightly coupled, or chained together. We classify the reliability of microservices: if a minor microservice fails, the application continues to operate, possibly with reduced performance or functionality; if a critical microservice fails, the entire application fails. Combining reliability with the established classifications of dependence (individual/chained) and state (stateful/stateless) defines a new three-dimensional space: the Microservices Dependency State Reliability (MDSR) classification. Using three web application case studies (Hipster-Shop, Jupyter and WordPress) we identify microservice instances that exemplify the six points in MDSR. We demonstrate that each point in MDSR corresponds to a known reliability pattern or bad smell. Hence MDSR provides a structured classification of microservice software with the potential to improve code quality.
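The three MDSR dimensions can be pictured as a simple record type; the sketch below is only an illustrative encoding, and the example services and labels are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Microservice:
    name: str
    dependence: str   # "individual" | "chained"
    state: str        # "stateful" | "stateless"
    reliability: str  # "minor" | "critical"

services = [
    Microservice("recommendations", "individual", "stateless", "minor"),
    Microservice("checkout", "chained", "stateful", "critical"),
]
for s in services:
    print(f"{s.name}: ({s.dependence}, {s.state}, {s.reliability})")
```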
 


Systems Software for emerging non-traditional hardware topologies (01 March, 2022)

Speaker: Antonio Barbalace

### ATTN: Time change 11:00 - 12:00

 

### Abstract

Today’s computer hardware is increasingly heterogeneous, including several special-purpose and reconfigurable accelerators that sit alongside the central processing unit (CPU). Emerging platforms go a step further, including processing units (CPUs and/or accelerators) in the storage, network, and memory hierarchies (near-data processing architectures). They therefore introduce hardware topologies that didn’t exist before — non-traditional ones, e.g., a single computer with multiple diverse CPUs as well as accelerators.

Existing, traditional systems software has been designed and developed under the assumption that a single computer hosts a single CPU complex. Therefore, there is one operating system running per computer, and software is compiled to run on a specific CPU complex. However, on emerging platforms this no longer applies: every different CPU complex requires its own operating system and applications, which are not compatible with each other, making a single platform look like a distributed system – even when CPU complexes are tightly coupled. This makes programming hard and hinders a whole set of performance optimizations. This talk therefore argues that new systems software is needed to better support emerging non-traditional hardware topologies, and introduces new operating system and compiler designs to achieve easier programming and full exploitation of system performance.

 

### About the Speaker

Antonio Barbalace holds a Senior Lecturer (Associate Professor) position within the School of Informatics at the University of Edinburgh, Scotland. Before that, he was an Assistant Professor in the Computer Science Department at Stevens Institute of Technology, New Jersey. Prior to that, he was a Principal Research Scientist and Manager at Huawei’s Munich Research Center (MRC). Earlier, he was a Research Assistant Professor, initially a postdoc, within the Systems Software Research Group, ECE Department, at Virginia Tech, Virginia. He earned a PhD in Industrial Engineering from the University of Padova, Italy, and an MS and BS in Computer Engineering from the same University.
Antonio Barbalace’s research interests include all aspects of system software, embracing hypervisors, operating systems, runtime systems, and compilers/linkers, for emerging increasingly parallel and heterogeneous computer architectures, including pervasive shared memory, disaggregated architectures, and Near-Data Processing (NDP) platforms.

 


Heterogeneous Programming for Edge Computing (15 February, 2022)

Speaker: Paul Keir

Abstract

Heterogeneous computing hardware is increasingly available and programmable; yet questions arise on the quality and long term stability of the proprietary tools and libraries provided by individual vendors. Support for heterogeneous programming within the standard C++ library holds the promise of maximising market forces; but this hope has again receded, and will now come no sooner than 2026.

Today, Khronos SYCL provides an open, standard solution for heterogeneous programming using C++, with a range of implementations available. Yet, while designed to be cross-platform, the GPU market leader (NVIDIA) provides no SYCL implementation to compete with its own proprietary CUDA language. In this talk, we present a new prototype SYCL implementation which nevertheless uses a compiler and runtime provided by NVIDIA to implement SYCL as a library.

About the Speaker
Paul Keir is a lecturer and programme leader of the B.Sc. (Hons.) Computer Games Technology degree programme at the University of the West of Scotland. After a number of years in the games industry, and obtaining his doctorate at the University of Glasgow, Paul led the contribution of Codeplay Software on two EU FP7 projects: LPGPU and CARP. His research interests include compilers, programming languages and high performance computing.

 


Do Current Online Coding Tutorials Systems Address Novice Programmer Difficulties? (25 January, 2022)

Speaker: Ohud Abdullah

Online Coding Tutorials Systems provide a basis for free and open interactive programming education at scale. Such browser-based systems featuring automated feedback are increasingly popular as remote learning has become normalized. In addition, these systems facilitate practical software development experiences that form an integral part of the learning process for novice programmers. However, such systems will only be truly effective if they adequately address novice programmer learning requirements and challenges.
Therefore, our focus in this paper is to investigate whether current popular online coding tutorials systems address novice programmer difficulties or not. In this paper, we report on three studies to answer this question: In the first study, a systematic literature review identifies common programming learning challenges. In addition, from the literature, we identify a set of features that could enable richer and more effective programming learning experiences. In the second study, we identify some feature suggestions through an online survey instrument to collect learner and educator feedback about online coding platform interaction. In the third study, we analyse five online coding tutorials systems to investigate whether the current systems provide the identified features.

We find that current online coding systems provide many of the identified novice support features, although some, such as auto-completion, customized hints, visual maps and quizzes, are entirely absent. We conclude by considering the implications of these findings, emphasising the need for enhanced support for novice programmers through a more considerate design process for online coding tutorials systems.


A Large Scale Study of Behaviour Driven Development in Open Source Projects (18 January, 2022)

Speaker: Tim Storer

Behaviour driven development (BDD) is the practice of developing requirements specifications, typically in the Gherkin language, linked to executable acceptance tests. BDD has received considerable interest within the software industry. There is extensive grey literature, and some initial studies have suggested that BDD is popular in both commercial software development and open source projects. However, our knowledge of the practice of BDD remains rather limited. To gain an objective view of BDD in open source projects, we obtained a randomly selected sample of 1 million GitHub repositories. The sample was filtered to identify ‘real’ software development projects that contain Gherkin feature files for behaviour driven development. We assessed the prevalence of BDD in open source projects and drew conclusions about the practice of BDD within them.


HIVE - Bringing Scalability to the Erlang OTP (14 December, 2021)

Speaker: Natalia Chechina

### Abstract
The Erlang programming language is famous for scalability and fault-tolerance. However, it was designed for a single machine and a small group (up to 10) of geographically closely located machines. Erlang distribution has been successfully stretched for years, but there is an understanding in the community that in a distributed setting OTP doesn’t provide sufficient support to scale applications. There have been a number of attempts to scale Erlang — including my beloved SD Erlang with its s_groups developed at Glasgow — but still we don’t have a library or framework that is sufficiently widely used to be included in the OTP.

In this talk, I will discuss another attempt to bring scalability to the Erlang OTP currently developed at Erlang Solutions called HIVE, and ingredients that we hope will make it a success this time.

### About the Speaker

Natalia Chechina is a Consultant Developer at Erlang Solutions. Her research interests include distributed systems, cooperative robotics, and fault-tolerance at scale. Natalia received her PhD in 2011 from Heriot-Watt University and then worked as a postdoctoral researcher at Heriot-Watt and Glasgow Universities (2011-2017) and as a lecturer at Bournemouth University (2017-2021).
She then moved to industry in March 2021. Natalia is a regular reviewer of international conference proceedings and journals, and is currently the Chair of the Embedded Working Group (Erlang Ecosystem Foundation) and the Steering Committee Chair of the ACM SIGPLAN Erlang Workshop.


Verifying the User of Motion Controller Robotic Arm Systems via the Robot Behaviour (07 December, 2021)

Speaker: Emma Li

Motion-controlled robotic arms allow a user to interact with a remote real world without physically reaching it. By connecting cyberspace to the physical world, such interactive teleoperations promise to improve remote education, virtual social interactions and online participatory activities. In this work, we build a motion-controlled robotic arm framework comprising a robotic arm end and a user end, which are connected via a network and responsible for manipulator control and motion capture respectively. To protect system access, we propose to verify who is controlling the robotic arm by examining the robotic arm’s behaviour, which adds a second security layer on top of the system login credentials. This work makes the following contributions:

1) We find that a robotic arm’s motion inherits its human controller’s behavioural biometric in interactive control scenarios.

2) We derive the unique robotic motion features to capture the user’s behavioural biometric embedded in the robot motions.

3) We develop learning-based algorithms to verify the robotic arm user (a generic sketch of this style of verification is given below). Extensive experiments show that our system achieves 94% accuracy in distinguishing users while preventing user identity spoofing attacks with 95% accuracy.
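A synthetic sketch of learning-based verification from motion feature vectors; the talk's feature set and models are specific to robotic-arm telemetry and are not this code.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
user_a = rng.normal(0.0, 1.0, (50, 6))   # synthetic motion features, user A
user_b = rng.normal(2.0, 1.0, (50, 6))   # synthetic motion features, user B
X = np.vstack([user_a, user_b])
y = np.array([0] * 50 + [1] * 50)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
probe = rng.normal(2.0, 1.0, (1, 6))     # a session claiming to be user B
print("verified as user", clf.predict(probe)[0])
```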


Cloud Deployment Based on Service Level Requirements (30 November, 2021)

Speaker: Yehia Elkhatib

Deployment of applications in the cloud requires traversing through a market crowded with a few large and many small providers, each offering a wide range of service provisions. I will discuss SLO-ML, a cloud modelling language that provides concepts for modelling service level objectives of cloud applications. This allows adaptive deployment of an application based on specific service-level objectives of a given application. I will introduce the architectural design of SLO-ML and the associated broker and discuss results from a mixed-methods evaluation.


(1) Cmm_of_wasm: From WebAssembly to Native Code via the OCaml Backend & (2) Effect of Unequal Clustering Algorithms in WirelessHART networks. (23 November, 2021)

Speaker: (1) Simon Fowler & (2) Nouf Helal H AlHarbi

Talk 1: Cmm_of_wasm: From WebAssembly to Native Code via the OCaml Backend
Speaker: Simon Fowler
Time: 2:00-2:30 PM
 
Abstract:
WebAssembly is a platform-independent bytecode language designed to replace JavaScript as a compilation target for web applications. Many languages can now target WebAssembly; however, compiling *from* WebAssembly *to* native code makes it possible to prototype and evaluate newly proposed features such as garbage collection and effect handlers using a much more manageable codebase than production browser implementations.
In this talk, I will discuss some work I did a few years ago on implementing cmm_of_wasm, a feature-complete ahead-of-time compiler for WebAssembly, which targeted the CMM backend of the OCaml compiler.  I will give a tutorial introduction to WebAssembly, discuss the various compilation steps, and discuss challenges that arose both due to the design of WebAssembly and the choice of CMM as a compilation target. If time permits I will discuss how the current WebAssembly landscape has changed since I did this work.
 
 
Talk 2: Effect of Unequal Clustering Algorithms in WirelessHART networks.
Speaker: Nouf Helal H AlHarbi
Time: 2:30-3:00 PM
 
Abstract:
The use of Graph Routing in Wireless Highway Addressable Remote Transducer (WirelessHART) networks offers the benefit of increased reliability of communications because of path redundancy and multi-hop network paths. Nonetheless, Graph Routing in a WirelessHART network creates a hotspot challenge resulting from unbalanced energy consumption, because it is based on the first path available to transmit packets from a source device to a final destination. This research proposes the use of unequal clustering algorithms with Graph Routing in WirelessHART networks to alleviate the hotspot problem under energy constraints and enhance network performance, which may provide a practical solution for WirelessHART networks. Graph Routing is compared with pre-set and probabilistic unequal clustering algorithms in terms of energy consumption, packet delivery ratio, throughput, and average end-to-end delay. The simulation results show that Graph Routing with probabilistic unequal clustering improves network performance in all metrics considered, which are the important requirements of industrial wireless networks.
 


Trustworthy Cloud Computing with Hardware Security (16 November, 2021)

Speaker: Peter Pietzuch

Abstract:

An increasing number of application use cases move from on-premise deployments to public cloud environments. The security requirements placed on applications in clouds, however, are changing: while initial cloud security mechanisms focused on the strict isolation of cloud tenants, micro-service architectures and data-intensive applications, e.g. using machine learning, require the efficient yet secure sharing of data between services and components.

In this talk, I will describe our work on using hardware security primitives to bridge the tension between isolation and sharing in cloud environments. Our work on CubicleOS explores isolation: we have created a compartmentalised library OS for cloud environments that isolates components from a monolithic kernel with minimal manual code changes. By using hardware memory tagging (Intel MPK), it becomes possible to protect components while maintaining existing kernel interfaces. Our work on Coffers explores sharing: we use hardware support for memory capabilities (CHERI) to isolate application components and provide new efficient IPC-like communication primitives for data sharing.

About the Speaker:

Peter Pietzuch is a Professor of Distributed Systems at Imperial College London, where he leads the Large-scale Data & Systems group (http://lsds.doc.ic.ac.uk).


Security aspects of data plane programmability (09 November, 2021)

Speaker: Sandra Scott-Hayward

Abstract:

While software-defined networking (SDN) has revolutionised networking by making control plane programmability a reality, in recent years, there has been a renewed interest in data plane programmability. This is driven by open-source and programmable devices that offer high performance and custom packet processing. In this talk, I will discuss DNSxP, our DNS data exfiltration protection architecture that uses SDN and data plane programmability (DPP), and adversarial exploitation of P4 data planes. The adversarial exploit is an attacker modifying the network behaviour by altering the forwarding behaviour of the switches while attempting to hide their violation(s) of the network policy.

About Sandra Scott-Hayward:

Sandra Scott-Hayward is a Senior Lecturer (Associate Professor) with the School of Electronics, Electrical Engineering and Computer Science, and a Member of the Centre for Secure Information Technologies at Queen’s University Belfast (QUB). She began her career in industry and became a Chartered Engineer in 2006 having worked as a Systems Engineer and Engineering Group Leader with Airbus. Since joining academia, she has published a series of IEEE/ACM papers on security designs and solutions for softwarized networks based on her research on the development of network security architectures and security functions for emerging networks. She received Outstanding Technical Contributor and Outstanding Leadership awards from the Open Networking Foundation in 2015 and 2016, respectively, having been elected and serving as the Vice-Chair of the ONF Security Working Group from 2015 to 2017. Amongst many other service memberships, she is on the Organizing Committee of IEEE NetSoft and is an Associate Editor of IEEE Transactions on Network and Service Management. She is Director of the QUB Academic Centre of Excellence in Cyber Security Education (ACE-CSE), one of the first universities to be awarded this recognition by the U.K. National Cyber Security Centre. She has been selected as a Polymath Fellow of the Global Fellowship Initiative at the Geneva Centre for Security Policy (GCSP) from 2021 to 2023.


A paper (Vibration Edge Computing in Maritime IoT) and a workshop (on website committee tasks) (02 November, 2021)

Speaker: Lito Michala

The seminar will have two parts.
- The first part will be a traditional presentation of a recently accepted paper. IoT and the Cloud are among the most disruptive changes in the way we use data today. These changes have not significantly influenced practices in condition monitoring for shipping. This is partly, but not only, due to the cost of continuous data transmission. Several vessels are already equipped with a network of sensors; however, continuous monitoring is often not utilised and onshore visibility is obscured. Edge computing is a promising solution, but there is a challenge in sustaining the required accuracy for predictive maintenance. We investigate the use of IoT systems and Edge computing, evaluating the impact of the proposed solution on the decision making process. Data from a sensor and the NASA-IMS open repository was used to show the effectiveness of the proposed system and to evaluate it in a realistic maritime application. The results demonstrate our real-time dynamic intelligent reduction of transmitted data volume by 10^3 without sacrificing specificity or sensitivity in decision making. The output of the Decision Support System fully corresponds to the monitored system's actual operating condition and to the output when the raw data is used instead. The results demonstrate that the proposed, more efficient approach is just as effective for the decision making process.


- In the second part (the last 20 minutes) we will run an interactive workshop on acquiring content for the website, and discuss creative content such as capturing cool pictures of your amazing projects!


(1) Improving the performance of in-network Key-Value Stores & (2) Scanning the IPv6 Space (19 October, 2021)

Speaker: (1) Stefanos Sagkriotis & (2) Vivian Band

Talk 1: Improving the performance of in-network Key-Value Stores
Speaker: Stefanos Sagkriotis
Time: 2:00-2:30 PM
Abstract:
In this talk, I’ll explain the methods and technologies that enable network devices to provide responses to key-value (KV) queries. This effectively reduces the response latency by half when compared against legacy methods and allows line-rate throughput for query responses. We’ll be discussing design decisions of state-of-the-art in-network KV platforms that introduce performance limitations. I’ll be presenting a new platform that addresses these shortcomings and achieves significant improvements over the state-of-the-art (up to orders of magnitude) in throughput, latency, and scalability.


Talk 2: Scanning the IPv6 Space
Speaker: Vivian Band
Time: 2:30-3:00 PM
Abstract:
The 128-bit IPv6 space is vast and sparsely populated. It's impossible to exhaustively scan this space, but there are some patterns and sources of information which can be used to reduce scan ranges to manageable levels. This talk will cover an overview of some space reduction strategies, research questions to consider when developing an IPv6 scanner, and some initial findings from network prefix scans.


Reducing the memory utilisation of legacy scientific code (12 October, 2021)

Speaker: Prof Wim Vanderbauwhede

We present a novel approach to aggressively reduce the memory utilisation of stencil-based legacy scientific Fortran code through a program transformation that trades memory access for computation. The code is first auto-parallelised using automatic program transformation. We use an internal domain-specific functional language in the compiler for the analysis and program transformation. The key contribution of this paper is a set of type-driven rewrite rules that identify and eliminate the intermediate arrays, with proofs of correctness. The paper also discusses code generation into Fortran-95 parallelised with OpenMP. We demonstrate the effectiveness of our approach using a real-world exemplar, the Large Eddy Simulator for Urban flows, and show that our algorithm successfully removes 31 out of 37 intermediate arrays, reducing the memory footprint by a factor of 6.6x. As stencil code is generally memory bandwidth limited, the memory-reduced code can actually be faster than the reference code, as evidenced by a speed-up of 2x for our exemplar.
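The following toy NumPy sketch illustrates the underlying idea of trading memory access for computation: fusing two stencil passes eliminates the intermediate array. In the work above the transformation is type-driven and automatic, with proofs; here it is done by hand.

```python
import numpy as np

a = np.arange(10.0)

# Unfused: materialises an intermediate array `tmp`.
tmp = np.empty_like(a)
tmp[1:-1] = (a[:-2] + a[2:]) / 2          # first stencil pass
out1 = np.empty_like(a)
out1[2:-2] = (tmp[1:-3] + tmp[3:-1]) / 2  # second pass reads tmp

# Fused: recompute the first stencil inside the second; no tmp needed.
out2 = np.empty_like(a)
out2[2:-2] = ((a[:-4] + a[2:-2]) / 2 + (a[2:-2] + a[4:]) / 2) / 2

print(np.allclose(out1[2:-2], out2[2:-2]))  # True: same result, less memory
```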


GLASS Inaugural session (05 October, 2021)

Speaker: Prof. Dimitrios Pezaros

We will be starting our seminar series next week, at the usual time (Tuesdays, 2.00pm-3.00pm). We will start with online seminars, exploring the possibility of some hybrid model for the future.

I will give the first seminar on Tuesday, 5th Oct, 2.00pm. I will give an overview/update of the Section and welcome new members and, if time allows, talk about research too.


Systems Seminar: Trustable Credit: how cyberphysical systems will need to prove the quality of their data for new natural capital finance markets (03 August, 2021)

Speaker: Hannah Rudman

This talk should be of interest to people thinking about internet of things sensors, satellite and drone earth observation data, digital device and data standards, as well as those working on green and nature-based finance and natural economy projects.

Speaker bio:

Dr Hannah Rudman is SRUC's Senior Challenge Research Fellow & Data Policy Lead. Hannah's work is grounded in interdisciplinary Participatory Action Research and focuses on digital and data innovations. Hannah is convenor of Trustable Credit, an adhocracy co-developing open standards for digital devices and their data measuring carbon, biodiversity and nature improvements. She is co-leader of the Scottish Conservation Finance Pioneers Group, and sits on the Advisory Team of UK AgriTech Centre Agrimetrics. Hannah is on the independent advisory board of the £10m DECaDE Centre for the Decentralised Economy, is a technical expert on the Digital Identity Scotland programme, is an expert evaluator for the European Innovation Council, and was formerly a member of the Scottish Government's 2020 Climate Group. She co-authored "Distributed Ledger Technologies in Public Services", a report commissioned by The Scottish Government in 2018.

Hannah was elected as Fellow of the British Computer Society in 2016, as Honorary Fellow at Durham University in 2015, and as Fellow of the Royal Society of Arts, Manufacture and Commerce in 2007. She has a PhD in Information Systems and is also an experienced non-executive director. Hannah currently serves as Board Trustee and Director for the National Galleries of Scotland.


Systems Seminar: Improving GHC Haskell NUMA Profiling (22 June, 2021)

Speaker: Phil Trinder

Short Abstract:
Lots of pretty pictures of NUMA usage by GHC Haskell Programs.
 
Technical Abstract:
As the number of cores increases, Non-Uniform Memory Access (NUMA) is becoming increasingly prevalent in general-purpose machines. Effectively exploiting NUMA can significantly reduce memory access latency, and thus runtime, by 10-20%, and profiling provides information on how to optimise. Language-level NUMA profilers are rare, and mostly profile conventional languages executing on virtual machines. This talk reports work where we profile, and develop new NUMA profilers for, a functional language executing on a runtime system.
 
We start by using existing OS and language level tools to systematically profile 8 benchmarks from the GHC Haskell nofib suite on a typical NUMA server (8 regions, 64 cores). We propose a new metric, the NUMA access rate, that allows us to compare the load placed on the memory system by different programs, and use it to contrast the benchmarks. We demonstrate significant differences in NUMA usage between computational and data-intensive benchmarks, e.g. local memory access rates of 23% and 30% respectively. We show that small changes to coordination behaviour can significantly alter NUMA usage, and for the first time quantify the effectiveness of the GHC 8.2 NUMA adaptation.
 
We identify information not available from existing profilers and extend both the numaprof profiler and the GHC runtime system to obtain three new NUMA profiles: OS thread allocation locality, GC count (per region and generation) and GC thread locality. The new profiles not only provide a deeper understanding of program memory usage, they also suggest ways that GHC can be adapted to better exploit NUMA architectures.
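As a small illustration of the kind of metric involved, the sketch below computes a local-access-rate figure from synthetic per-region access counters (made-up numbers, not GHC profile data):

```python
counters = {  # accesses by cores in region r, to memory in region m
    (0, 0): 9_000, (0, 1): 21_000,
    (1, 1): 6_000, (1, 0): 14_000,
}
total = sum(counters.values())
local = sum(v for (r, m), v in counters.items() if r == m)
print(f"local access rate: {100 * local / total:.0f}%")  # 30%
```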


Systems Seminar: Towards QoS-aware Provisioning of Chained Virtual Security Services in Edge Networks (27 April, 2021)

Speaker: Mircea Iordache Șică

Future networks are expected to deliver low-latency, user-specific services in a flexible and efficient manner. Operators have to ensure infrastructure resilience in the face of such challenges, while maintaining service guarantees for subscribed users. One approach to support emerging use cases is through the introduction and use of virtualised network functions (VNFs) at the edge of the network. While placement of VNFs at the network edge has been previously studied, it has not taken into account services comprised of multiple VNFs or considerations for network security. In this paper we propose a mathematical model for latency-optimal on-path allocation of VNF chains on physical servers within an edge network infrastructure, with special considerations for network security applications and operators' best practices. We acknowledge the challenges of employing optimal solutions in real networks and provide the Minimal Path Deviation Allocation algorithm for placement of security-focused network services in a distributed edge environment, minimising end-to-end latency for users. We then evaluate our placement results over a simulated nation-wide network using real-world latency characteristics. We show that our placement algorithm provides near-optimal placement, with minimal latency violations with respect to an optimal solution, whilst offering robust tolerance to temporal latency variations.


Systems Seminar: Optimized Contextual Data Offloading in Mobile Edge Computing (27 April, 2021)

Speaker: Ibrahim Alghamdi

Mobile Edge Computing (MEC) is a new computing paradigm that moves computing resources closer to the user, at the edge of the network. The aim is to achieve low latency and high bandwidth, and to improve energy consumption when running computational tasks. The idea of deploying MEC servers near to users, alongside 5G technology, has opened up interest in the field of Vehicular Networks (VN). MEC servers can play a significant role in improving the performance of VN applications. In this environment, offloading computation over the contextual data collected by mobile nodes (smart vehicles, for example) meets the challenge of deciding when and where to offload the collected data while on the move. In this work, we modelled the problem of offloading contextual data to MEC servers as an optimal stopping problem. Our objectives are to offload to a MEC server with lower execution time, before the collected data become stale. We evaluated our model using a real mobility trace with real server utilization; the results showed that the proposed model outperforms other offloading methods.
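For flavour, the sketch below applies a classic observe-then-commit optimal-stopping rule to synthetic predicted execution times; the model in the talk is more refined and also accounts for data staleness.

```python
import random

random.seed(1)
servers = [random.uniform(1, 10) for _ in range(12)]  # predicted exec times
k = len(servers) // 3                                 # observation phase

threshold = min(servers[:k])       # best seen while only observing
# Commit to the first later server beating the threshold (else the last).
choice = next((t for t in servers[k:] if t < threshold), servers[-1])
print(f"observed best {threshold:.2f}, offloaded at {choice:.2f}")
```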


Systems Seminar: Towards Wireless-enabled Multicore Computer Architectures (23 March, 2021)

Speaker: Sergi Abadal

Abstract:

The main design principles in computer architecture have shifted from a monolithic scaling-driven approach towards the emergence of heterogeneous architectures that tightly co-integrate multiple specialized computing and memory units. This heterogeneous hardware specialization requires interconnection mechanisms that serve the architecture’s communication demands. State-of-the-art approaches combine the Network-on-Chip (NoC) and Network-in-Package (NiP) approaches to interconnect the components, yet using wired solutions that are rigid and, thus, unable to provide the efficiency and architectural flexibility required by future key applications. In this talk, the approach proposed in the European project WiPLASH to bridge this gap will be presented. The project aims to pioneer an on-chip wireless communication plane able to provide architectural plasticity, that is, reconfigurability and adaptation to the application requirements, to achieve very high performance without any loss of generality. In the talk, we will first outline the theoretical foundations of the key hardware and software enablers of wireless communication at the chip scale, that is, the antennas and protocols, respectively. Then, we will describe how wireless technology may impact the design of NoC/NiP for heterogeneous multi-chiplet systems, and provide examples of novel architectures that may greatly benefit from the scalable broadcast and system-level flexibility provided by wireless interconnects.

Biography:

Sergi Abadal received his PhD in Computer Architecture from the Department of Computer Architecture, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain, in July 2016. Previously, he obtained the M.Sc. and B.Sc. in Telecommunication Engineering at UPC in 2011 and 2010, respectively. He has held visiting positions at Georgia Tech in 2009, the University of Illinois Urbana-Champaign in 2015/2016/2019, and the Foundation for Research and Technology – Hellas in 2018. Currently, he is Coordinator of the FET-OPEN WIPLASH project, and also works as recipient of a NEC Labs research fellowship. He is Area Editor of the Nano Communication Networks (Elsevier) Journal, has served as a TPC member of more than 15 conferences, and has published over 75 articles in top-tier journals and conferences. Abadal received the Nano Communication Networks Young Investigator Award in 2019, the UPC Outstanding Thesis award in 2016, the INTEL Doctoral Fellowship in 2013, and the Accenture Award for M.Sc. students in 2012. His current research interests are in the areas of chip-scale wireless communications, including channel modeling and protocol design, and the application of these techniques to the creation of next-generation computer architectures.


Systems Seminar: Edge Computing enabled Deep Learning for Smart Mobile DNA Malaria Diagnostics (16 March, 2021)

Speaker: Lito Michala

This talk will present our recent submission to Nature Engineering, which is under review. The paper’s abstract is: "One challenge in infectious disease diagnosis is that results need to be rapidly communicated to doctors once testing has been completed, in order to implement care pathways. This is a significant obstacle when testing is decentralised at different sites, as is the case in low-resource rural communities, where such diseases cause the largest burden. Here we demonstrate an easy-to-use, end-to-end Internet-of-Things platform for multiplexed DNA malaria diagnosis, enabled on a mobile phone providing an edge computing capability. We overcome current ethical concerns associated with secure data connectivity by using blockchain to provide trustworthiness, privacy and endorsement whilst accessing our secured healthcare data management and deep learning decision-support tools. The system, which was validated in villages in rural Uganda, correctly identified >98% of tested cases, reducing the need for expert human intervention and providing confidence in the interpretation of the results. The platform also provides secure geotagged diagnostic information, opening up the future possibility of the secure integration of infectious disease data within surveillance frameworks." Any feedback and questions will be very welcome.


A usability model for online coding tutorials systems (09 March, 2021)

Speaker: Ohud Alasmari

Online Coding Tutorial systems provide a basis for free and open interactive programming education at scale. Such browser-based systems featuring automated feedback are increasingly popular as remote learning has become normalized. Programming students with different social and cultural backgrounds from all over the world can access these platforms. In addition, Online Coding Tutorials facilitate practical software development experiences that form an integral part of the learning process for novice programmers. However, such systems will only be truly effective if they meet diverse programming learner requirements. In this paper, we argue that these requirements must be informed by a range of disciplines, including system usability, computing pedagogy, and internationalization. We conducted a wide-ranging survey of partially relevant usability models; from these studies we synthesized a new and specialized hierarchical usability model for Online Coding Tutorial systems. This new model has four dimensions: pedagogy, platform, culture and cognition. We claim that, in relation to previous models, our multi-dimensional framework covers a more comprehensive range of requirements for online programming language learners. This framework can be used to characterize and compare existing online tools, as well as to inform good design practice for new tools. We provide initial evidence of the potential utility of our model by applying it to three mainstream programming learning tools: LearnPython, TryRuby, and TryJavaScript.


Playing Planning Poker in Crowds: Human Computation of Software Effort Estimates (09 March, 2021)

Speaker: Mohammed Alhamed

Reliable, cost-effective effort estimation remains a considerable challenge for software projects. Recent work has demonstrated that the popular Planning Poker practice can produce reliable estimates when undertaken within a software team of knowledgeable domain experts. However, the process depends on the availability of experts and can be time-consuming to perform, making it impractical for large scale or open source projects that may curate many thousands of outstanding tasks. This paper reports on a full study to investigate the feasibility of using crowd workers, supplied with limited information about a task, to provide comparably accurate estimates using Planning Poker. We describe the design of a Crowd Planning Poker (CPP) process implemented on Amazon Mechanical Turk and the results of a substantial set of trials, involving more than 5000 crowd workers and 39 diverse software tasks. Our results show that a carefully organised and selected crowd of workers can produce effort estimates that are of similar accuracy to those of a single expert.


Systems Seminar: The NIS Directive & Supply Chain Resilience (16 February, 2021)

Speaker: Tania Wallis

This project on Supply Chain Resilience, funded by the Research Institute in Trustworthy Interconnected Cyber-physical Systems (RITICS), explores the cybersecurity of our critical infrastructure from the nexus of business, engineering and public policy interests. It investigates the impact of the EU Directive on Security of Network and Information Systems (NIS Directive) on securing essential services in the energy, water and transport sectors. Recent supply chain attacks demonstrate that suppliers are a potential attack route into multiple operators. Operators can also be caught in the crossfire of attacks targeted elsewhere, due to sharing common vulnerabilities. This research presents the challenges from the different perspectives of providers of essential services, their suppliers and regulators. It highlights that individual operators are not in a position to fully secure their infrastructure and services without reaching out in collaboration with other organisations to develop combined sector approaches and to influence improvements in their supply chains. The project recommends a governance framework that balances regulatory control with sector collaborations to drive the necessary improvements.


Systems Seminar: Introducing CHERI (09 February, 2021)

Speaker: Jeremy Singer

The CHERI project originally started at Cambridge over ten years ago. The high-level concept involves adding hardware-supported metadata to pointer values (known as _capabilities_). This is effectively hardware-accelerated security. Capabilities enforce tight addressing bounds, so we can achieve fine-grained memory protection and low-overhead software compartmentalization. CHERI capabilities have major implications for instruction set architecture design, with extended register sets, additional capability management instructions, and code generation. Current instantiations of CHERI are available for MIPS, RISC-V and Arm.
 
A baseline set of software has already been ported to CHERI architectures, including FreeBSD and LLVM. The community is now looking to expand beyond low-level applications written in C - which is where our "Capable Virtual Machines" project comes in ... we intend to port high-level language runtimes such as the v8 JavaScript engine to CHERI. In this talk we will introduce CHERI, summarize our initial project exploration and speculate as to where we might end up in three years.
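As a toy model of the capability concept (bounds carried with the reference and checked on every use), consider the sketch below; real CHERI capabilities are hardware-enforced and unforgeable, which a Python class plainly is not.

```python
class Capability:
    """Toy capability: a reference to memory plus enforced bounds."""
    def __init__(self, mem, base, length):
        self.mem, self.base, self.length = mem, base, length

    def load(self, offset):
        if not 0 <= offset < self.length:   # bounds check on every access
            raise MemoryError("capability bounds violation")
        return self.mem[self.base + offset]

memory = list(range(100))
cap = Capability(memory, base=10, length=4)  # may only see memory[10:14]
print(cap.load(2))                           # ok: 12
try:
    cap.load(4)                              # one past the end
except MemoryError as e:
    print(e)
```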


Systems Seminar: Internet Complexity Through Five Decades of RFCs (26 January, 2021)

Speaker: Ignacio Castro

Abstract:

Operation of the Internet requires interoperability between networks, systems, and applications, as well as cooperation among a growing number of stakeholders. The IETF RFC series is critical in supporting this cooperation and interoperability. As the series marks its 50th anniversary, we measure the shifts and trends that have emerged, including the rise and fall of large Internet players, and the shift in relevance between industry and academia. In addition, we show that the Internet's growth and maturity has given rise to a longer standardisation process, and provide initial insight into the growing complexity.


Systems seminar: Secure Monitoring System for Industrial Internet of Things using Searchable Encryption and Access Control (15 December, 2020)

Speaker: Jawhara Alamri

Technological advancements in the Internet of Things (IoT) and related technologies lead to revolutionary advancements in many sectors. One of these is the industrial sector, which leverages IoT technologies to form the Industrial Internet of Things (IIoT). IIoT has the potential to enhance the manufacturing process by improving the quality, traceability, and integrity of industrial processes. This enhancement is achieved by deploying IoT devices (sensors) across manufacturing facilities; therefore, monitoring systems are required to collect data from multiple locations and analyse it, usually on the cloud. As a result, IIoT monitoring systems should be secure, privacy-preserving, and provide real-time responses for critical decision making.

In this talk, I will present searchable encryption and access control approaches. Using the plastics industry as an example, I'll show how the combination of these approaches can be used in smart factories, and how it can help them to control systems remotely, make smart decisions and improve the production process.
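To give a flavour of keyword search over data the server cannot read, here is a deliberately simplified toy sketch using HMAC tokens; production searchable-encryption schemes hide far more (e.g. access and search patterns), and this is not the scheme presented in the talk.

```python
import hmac, hashlib

KEY = b"shared-secret"  # hypothetical key held by the factory, not the cloud

def token(word):
    # Deterministic keyword token; the server never sees the plaintext word.
    return hmac.new(KEY, word.encode(), hashlib.sha256).hexdigest()

# Index built client-side over sensor-record keywords, then uploaded.
index = {token(w): ["record-17"] for w in ["temperature", "pressure"]}

query = token("pressure")           # trapdoor generated by the client
print(index.get(query, []))         # server-side lookup: ['record-17']
```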


Systems seminar: Clustering Strategies in WirelessHART networks. (08 December, 2020)

Speaker: Nouf AlHarbi

The WirelessHART protocol is the most promising IWSN protocol due to its wide acceptance in industrial automation, monitoring, and process control, but it faces more challenges than WSNs, since wireless communications are affected by interference, noise and physical obstacles that are generally present in industrial environments. On the network layer in WirelessHART, a graph routing algorithm is used for packet transmission, based on the first available path to convey packets from a source to the gateway. However, such a technique may cause a hotspot problem due to imbalanced energy consumption.
 
In this talk, I will present an overview of the WirelessHART protocol. Next, the graph routing algorithm in WirelessHART will be discussed, showing how clustering-based graph routing can be applied in WirelessHART to achieve the best cooperation between sensor nodes and overcome the problem of hotspots around the gateway.


Systems seminar: Understanding the Implications of Content Moderation (08 December, 2020)

Speaker: Abdulwhab Alkarashi

The development of moderator practices has largely been ad hoc and unstudied. We do not know how moderators should best react to different forms of undesirable behaviour in different contexts. We do not really understand how legitimate disagreement can deteriorate into undesirable abusive behaviour, or how to detect whether this is happening on multiple scales. Moderators can therefore be overwhelmed. Moderators of online discussions also respond differently, engaging in specific tactics to maintain debate quality that are specific to settings. In particular, what seems to be acceptable in one community may not be acceptable in another.
 
In this talk, I will illustrate current work in understanding cases of abusive content and how they interplay with online contributions overall.


Systems Seminar: A Feasibility Study of Cache in Smart Edge Routers (01 December, 2020)

Speaker: Paul Harvey

Abstract:

With the coming of the 'age of edge', many people are turning to in-home edge computing. In particular, they are using content delivery networks (CDNs) as the application of choice. But should they? This talk discusses our initial efforts to understand the feasibility of using an in-home smart edge router as a practical location for caching. Using publicly available datasets, as well as a sampled subset of our online marketplace traffic of 111 million users applied to our work-in-progress simulation tool, we ask and try to answer a number of questions on hardware scoping, cache locations and, ultimately, appropriateness.

In addition to this work, I will also introduce our new research lab - Rakuten Mobile Innovation Studio - based in Tokyo.


Systems Seminar: FLOQ: FLow Optimised Queuing (24 November, 2020)

Speaker: Mihail Yanev

Implemented in routers, active queue management algorithms (AQMs) decide when and how to drop packets. This prevents buffer bloat and makes sure Internet traffic remains within rates that would not overload the network. However, dropping specific packets can have a massive impact on performance. For example, dropping the initial packets of browsing connections is known to result in big inflation of page load times. Yet none of the massively deployed queue management algorithms has a means of prioritising some traffic over other traffic.
 
In this talk, I will present FLOQ - a novel AQM approach that keeps up with the current state-of-the-art in terms of performance. Additionally, it allows for traffic prioritisation. We present an example where such prioritisation was used to solve the large PLTs problem with no negative impact on other traffic types.
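To illustrate the general idea of priority-aware dropping (a toy, not FLOQ's actual algorithm), the sketch below drops low-priority packets early once a queue threshold is crossed:

```python
from collections import deque

QUEUE_LIMIT, DROP_THRESHOLD = 10, 7
queue = deque()

def enqueue(packet):
    if len(queue) >= QUEUE_LIMIT:
        return False                                 # hard tail drop
    if len(queue) >= DROP_THRESHOLD and packet["prio"] == "low":
        return False                                 # early, selective drop
    queue.append(packet)
    return True

for i in range(12):
    prio = "high" if i % 3 == 0 else "low"
    ok = enqueue({"seq": i, "prio": prio})
    print(f"packet {i} ({prio}): {'queued' if ok else 'dropped'}")
```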


Systems Seminar: Parsing Protocol Standards (24 November, 2020)

Speaker: Stephen McQuistin

Internet protocol standards have been slow to adopt formal description techniques, and are still largely written in English prose. While this supports the social aspects of the standardisation process, inconsistencies and ambiguities are easily introduced. The use of formal specification languages would make standards documents machine-readable, allowing them to be checked for consistency, and enabling automatic parser generation. In this talk, I'll outline the social and technical barriers to the adoption of existing techniques, and identify the missing features needed to overcome them. 
 
I'll present the Network Packet Representation, a typed protocol representation that can describe the format of protocol data, including data-dependent formats, contextual information needed to maintain parser state, and functions needed for multi-stage parsing. Using TCP as an example, I'll show how the Network Packet Representation can be used to generate parser code in Rust, and how it can be integrated with the existing standards process.
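As a hand-written illustration of what a generated parser does (the toolchain described generates Rust from the typed representation; this Python is only a fixed-layout example):

```python
import struct

def parse_tcp_header(buf: bytes) -> dict:
    # First 20 bytes of a TCP header, fields per RFC 793.
    sport, dport, seq, ack, off_flags, window, checksum, urgent = \
        struct.unpack("!HHIIHHHH", buf[:20])
    return {
        "src_port": sport, "dst_port": dport,
        "seq": seq, "ack": ack,
        "data_offset": (off_flags >> 12) & 0xF,  # header length, 32-bit words
        "flags": off_flags & 0x1FF,
        "window": window,
    }

hdr = struct.pack("!HHIIHHHH", 443, 51000, 1, 0, (5 << 12) | 0x002, 65535, 0, 0)
print(parse_tcp_header(hdr))   # a SYN segment from port 443
```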


Systems seminar: Systems section overview and discussion (17 November, 2020)

Speaker: Dimitrios Pezaros

I will give an overview of the Section, updating members on our collective status and main activities, as well as some recent initiatives we would like members' feedback/contributions on. I am hoping the seminar will be a starting point for discussion. If time allows for a 'second part', I will give a high-level overview of my group's research activities, mainly as a stimulus for potential wider collaboration. If time does not allow, I will leave this second part for a separate slot.


Systems seminar: Pricing Python Parallelism (10 November, 2020)

Speaker: Dejice Jacob

The ALPyNA framework analyses moderately complex Python loop nests and automatically JIT compiles code for heterogeneous CPU and GPU architectures. Execution times may be reduced by offloading parallel loop nests to a GPU.
ACM is an analytical cost model for auto-parallelising loop nests in a dynamic language on heterogeneous architectures. GPU execution time prediction must account for factors like data transfer, block-structured execution, and starvation. We show that a comparatively simple, staged analytical model can accurately determine during execution when it is profitable to offload a loop nest.
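A hedged sketch of such a staged offload decision, with entirely made-up constants: offload a loop nest only when predicted transfer-plus-kernel time on the GPU undercuts the predicted CPU time.

```python
def gpu_time(n_bytes, n_iters):
    launch = 5e-5                       # kernel launch overhead, seconds
    transfer = 2 * n_bytes / 6e9        # host<->device copies at ~6 GB/s
    kernel = n_iters / 2e10             # hypothetical GPU throughput
    return launch + transfer + kernel

def cpu_time(n_iters):
    return n_iters / 2e8                # hypothetical CPU throughput

for n in [10**4, 10**6, 10**8]:
    n_bytes = 8 * n                     # one float64 array of n elements
    where = "GPU" if gpu_time(n_bytes, n) < cpu_time(n) else "CPU"
    print(f"{n:>10} iterations -> run on {where}")
```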


Systems Seminar: Automatic Design Space Exploration (DSE) for Program Optimization is a feasible solution (sometimes) (03 November, 2020)

Speaker: Cristian Urlea

In this talk I will present my solution to the problem of exploring a large design space of program optimizations for streaming data-flow applications, where an accurate cost-performance model exists. The exact solution is formulated in the context of the TyTra compiler framework, which targets Field Programmable Gate Arrays (FPGAs); however, in its most general form it can be applied to other problem domains, such as the optimization of Convolutional Neural Networks or image processing frameworks. Through this talk I hope to convince you that efficient DSE strategies for optimization problems can be mechanically recovered with much less effort than widely thought.


Compiling and optimising deep neural networks for inference with TVM (27 October, 2020)

Speaker: Perry Gibson

This talk discusses the field of deep learning compilers, with a focus on the TVM stack. The talk will explore the features of TVM, alternative approaches, the types of application knowledge that deep learning compilers can leverage, as well as research the speaker has conducted using TVM.


Seiðr: Dataplane Assisted Flow Classification Using ML (27 October, 2020)

Speaker: Kyle Simpson

Real-time, high-speed flow classification is fundamental for network operation tasks, including reactive and proactive traffic engineering, anomaly detection and security enhancement. Existing flow classification solutions, however, do not allow operators to classify traffic based on fine-grained, temporal dynamics due to imprecise timing, often rely on sampled data, or only work with low traffic volumes and rates. In this paper, we present Seiðr, a classification solution that: (i) uses precision timing, (ii) has the ability to examine every packet on the network, (iii) classifies very high traffic volumes with high precision. To achieve this, Seiðr exploits the data aggregation and timestamping functionality of programmable dataplanes. As a concrete example, we present how Seiðr can be used together with Machine Learning algorithms (such as CNN, k-NN) to provide accurate, real-time and high-speed TCP congestion control classification, separating TCP BBR from its predecessors with over 88–96 % accuracy and F1-score of 0.864–0.965, while only using 15.5 MiB of memory in the dataplane.


Systems Seminar: UAV Solutions Based on Wireless Communications (20 October, 2020)

Speaker: Carlos Tavares Calafate

In this talk an introduction to unmanned aerial vehicles will be made, detailing the main research challenges. Next, different applications and systems developed in our research group will be discussed, showing how wireless communications can be used to achieve advanced features such as swarm creation and collision avoidance.


Systems seminar: Notes on Notebooks: Is Jupyter the Bringer of Jollity? (13 October, 2020)

Speaker: Jeremy Singer

As the interactive computational notebook becomes a more prominent code development medium, we examine advantages and disadvantages of this particular source code format. We specify the structure of a coding notebook layout. We describe complexities in notebook programming; some of these are incidental whereas others may be inherent complexities. We outline how we envisage research and development might proceed to advance the cause of notebook programming.


Systems Seminars: Optimizing Image Processing Pipelines with a Domain-Extensible Compiler (06 October, 2020)

Speaker: Thomas KOEHLER

Halide and many similar projects have demonstrated the great potential of domain specific optimizing compilers. They enable programs to be expressed at a convenient high-level, while generating high-performance code for parallel architectures. As domains of interest expand towards deep learning, probabilistic programming and beyond, it becomes increasingly clear that it is unsustainable to redesign domain specific compilers for each new domain. In addition, the rapid growth of hardware architectures to optimize for poses great challenges for designing these compilers.
In this talk, I will show how to extend a unifying domain-extensible compiler with domain-specific as well as hardware-specific optimizations. The compiler operates on a generic representation of computational patterns that have proven flexible enough to express a wide range of computations. Optimizations are not hard-coded into the compiler but are expressed as user-defined rewrite rules that are composed into strategies controlling the optimization process. Crucially, both computational patterns and optimization strategies are extensible without modifying the core compiler implementation.

I will demonstrate through a case study that this compiler is capable of applying well-known image processing optimizations. On four mobile ARM multi-core CPUs, the code generated for the Harris operator outperforms the image processing library OpenCV by up to 16x and achieves performance close to - or even up to 1.4x better than - the state-of-the-art image processing compiler Halide.
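To make the strategy idea concrete, here is a minimal sketch, in Python rather than the compiler's own pattern IR, of rewrite rules as functions that either rewrite an expression or fail, composed by generic combinators; all names are illustrative:

# Expressions are nested tuples: ('*', a, b), ('+', a, b); leaves are atoms.

def seq(s1, s2):      # apply s1, then s2 to its result; fail if either fails
    return lambda e: (lambda r: None if r is None else s2(r))(s1(e))

def lchoice(s1, s2):  # try s1; fall back to s2
    return lambda e: s1(e) if s1(e) is not None else s2(e)

def try_(s):          # a strategy that never fails
    return lchoice(s, lambda e: e)

def top_down(s):      # apply s at the root, else to the first child it fits
    def go(e):
        r = s(e)
        if r is not None:
            return r
        if isinstance(e, tuple):
            for i in range(1, len(e)):
                c = go(e[i])
                if c is not None:
                    return e[:i] + (c,) + e[i + 1:]
        return None
    return go

def mul_by_two_to_add(e):   # user-defined rule: x * 2 ~> x + x
    if isinstance(e, tuple) and e[0] == '*' and e[2] == 2:
        return ('+', e[1], e[1])
    return None

expr = ('+', ('*', 'x', 2), 1)
print(try_(top_down(mul_by_two_to_add))(expr))  # ('+', ('+', 'x', 'x'), 1)

The rule itself carries no traversal logic: where and in what order it applies is delegated entirely to combinators like top_down, which is the separation of concerns the talk advocates.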


Systems Seminars: Tiered vs Tierless IoT Stacks - Comparing Smart Campus Software Architectures (06 October, 2020)

Speaker: Adrian Ramsingh

IoT software stacks are notoriously complex, conventionally comprising multiple tiers/components and requiring that the developer not only use multiple programming languages, but also correctly interoperate the components. A novel alternative is to use a single tierless language with a compiler that generates the code for each component, and for their correct interoperation.

We report the first ever systematic comparison of tiered and tierless IoT software architectures. The comparison is based on two implementations of a non-trivial smart campus application. PRSS has a conventional tiered Python-based architecture, and CWSS has a novel tierless architecture based on Clean and the iTask and mTask embedded DSLs. An operational comparison of CWSS and PRSS demonstrates that they have equivalent functionality, and that both meet the UoG smart campus functional and operational requirements.

Crucially, the tierless CWSS stack requires 70% less code than the tiered PRSS stack. We analyse the impact of the following three main factors. (1) Tierless developers need to manage less interoperation: CWSS uses two DSLs in a single paradigm where PRSS uses five languages and three paradigms. (2) Tierless developers benefit from automatically generated, and hence correct, communication. (3) Tierless developers can exploit powerful high-level abstractions such as TOP in CWSS. A far smaller and single-paradigm codebase improves software quality, dramatically reduces development time, and improves the maintainability of tierless stacks.


Systems Seminar: Adaptive Disconnection Tolerant Opportunistic Networks and Systems (20 May, 2020)

Speaker: Milena Radenkovic

This talk will give an overview of Dr Milena Radenkovic's research, which spans intelligent and distributed systems that are agile and adaptive to users' demands and the dynamic patterns of the underlying networks. It will then show examples of smart manufacturing, self-organised distributed security and privacy, and cognitive vehicular charging works, designed and built using a range of novel real-time multi-layer collaborative predictive complex spatio-temporal approaches.


Remote Systems Seminar: Towards reliable and efficient debugging on Unix-style platforms (29 April, 2020)

Speaker: Stephen Kell

Software continues not to be soft. Not coincidentally, most software continues to be built atop Unix-like compiler toolchains and process runtimes having only esoteric and limited support for introspection, run-time change and other debugging-flavoured facilities.

In the first part of this talk, I'll talk about improving the reliability and efficiency of (one specific kind of) debugging information on such platforms -- information used not only by debuggers, but also in program analysis tools, and even by the runtimes of high-level programming languages (e.g. for exception handling in C++). Generating such information correctly and completely is onerous for compiler authors, leading to a history of subtle bugs and limitations. Meanwhile, living with incomplete or incorrect information degrades the experiences of developers and users. Focusing on stack walking, I'll describe three techniques: one for validating the DWARF frame information tables used for this; one for the synthesis of such tables (e.g. for binaries that lack them); and one for precompiling unwind tables into native code, which results in a 25x DWARF-based unwind speedup in tools such as perf. This work appeared at OOPSLA 2019.

In the second part, I'll zoom out to describe why all this matters. While, for example, robust and efficient stack unwinding has been achieved in language virtual machines without techniques like the above, the Unix-style variant of the problem is both harder and, as I'll argue, more essential -- in that a solid foundation for commodity software systems can *only* be realised by evolving Unix, not merely by building atop it.


Remote Systems Seminar: Quantitative Types in Idris 2 (22 April, 2020)

Speaker: Edwin Brady

Dependent types allow us to express precisely what a function is intended to do. Recent work on Quantitative Type Theory (QTT) extends dependent type systems with linearity, also allowing precision in expressing when a function can run. This is promising, because it suggests the ability to design and reason about resource usage protocols, such as we might find in distributed and concurrent programming, where the state of a communication channel changes throughout program execution. Up to now, however, there has not been a full-scale programming language with which to experiment with these ideas. Idris 2 is a new version of the dependently typed language Idris, with a new core language based on QTT, supporting linear and dependent types. In this talk I will show the benefits of QTT in Idris 2, in particular how it improves interactive program development by reducing the search space for type-driven program synthesis, and how resource tracking in the type system leads to type-safe concurrent programming with session types.


Zoom meeting: E-mail Stephen.McQuistin@glasgow.ac.uk for the URL.


Cancelled: Intra-Systems Seminar (17 March, 2020)

Speaker: Saad Alahmari

Title: A Model for Describing and Maximising Security Knowledge Sharing to Enhance Security Awareness
Speaker: Saad Alahmari (30mn)

Employees play a crucial role in enhancing information security in the workplace, and this requires everyone having the requisite security knowledge and know-how. To maximise knowledge levels, organisations should encourage and facilitate Security Knowledge Sharing (SKS) between employees. To maximise sharing, we need first to understand the mechanisms whereby such sharing takes place and then to encourage and engender such sharing. A study was carried out to test the applicability of Transactive Memory Systems Theory in describing knowledge sharing in this context, which confirmed its applicability in this domain. To encourage security knowledge sharing, the harnessing of Self-Determination Theory was proposed: satisfying employee autonomy, relatedness and competence needs to maximise sharing. Such sharing is required to improve and enhance employee security awareness across organisations. We propose a model to describe the mechanisms for such sharing as well as the means by which it can be encouraged.


Systems Seminar: Using Google BigQuery for Analysis of RIPE Atlas Measurement Data (11 March, 2020)

Speaker: Stephen Strowes

The RIPE Atlas measurement platform collects hundreds of millions of individual network measurements every day from over 10,000 vantage points. Although most of this data is publicly available, we do not provide a general means to inspect broad cross-sections of this dataset.

We are investigating the use of large-scale data storage/analysis platforms, in particular Google's BigQuery, for handling this data. We're part-way between a prototype implementation last year and a more permanent implementation this year, from which data will be made generally available.

In this talk I'll cover the network measurements handled by the RIPE Atlas platform, how we support researchers using the platform, and how we intend to open up the resulting data for more public access.


Intra-Systems Seminar (10 March, 2020)

Speaker: Rongxiao Fu, Michel Steuwer

Title: Strategic Rewriting with Mini-Elevate
Speakers: Rongxiao Fu, Michel Steuwer (40mn)

Program transformation based on rewrite rules is a classic and important method for program optimisation. We propose to separate the specification of computations from the specification of optimizations in a more principled way than existing scheduling languages, for example from Halide or TVM. In our approach, optimization strategies are expressed as compositions of individual rewrite rules. We are developing a strategy language where strategies have a specific type that facilitates their composition. However, the types of strategies may not be expressive enough to provide detailed information about their properties, such as the expected shape of input expressions, making the strategies' behaviours less predictable. As our proposed solution to this problem, this talk will introduce Mini-Elevate, a row-polymorphic language designed for strategic term rewriting. Representative examples will show how polymorphic rows describe the shape of expressions and how the behaviours of strategies can be reflected in their types.


Systems Seminar: New Architectural Simulators for Developing Next-Generation Computing Platforms: MGPUSim and STONNE (04 March, 2020)

Speaker: José Luis Abellán

The Cloud, Fog and Edge computing model is a 3-tier strategy aimed at providing the required processing capability demanded by contemporary data-driven applications such as Artificial Intelligence and Big Data analytics. Given the growing demand for data processing due to the explosion in the Internet-of-Things device count, it is necessary to provide better scale-up and scale-out computing power with higher energy efficiency across the overall 3-tier compute infrastructure. To this aim, aside from leveraging the most advanced communication technologies (wireless and wired) and storage resources, every single computing node at every tier (from high-end servers in the cloud and fog, to low-end battery-operated devices at the edge) must be optimized and, as Dennard scaling and Moore’s law have come to an end, new integration technologies (2.5D stacking), communication technologies (NVIDIA NVLink and photonics), and heterogeneous and specialized architectures are the most promising candidates to explore. Given this evolving and wide design space, researchers from academia and industry must be equipped with the right tools for fast prototyping of their architectural design ideas. Thus, accurate, fast and flexible open-source architectural simulators that can faithfully simulate and validate forthcoming computing developments are a must. However, state-of-the-art simulators are not capable of truly fulfilling all these challenging requirements.
To bridge this gap, in this talk I am going to present MGPUSim and STONNE, our two cycle-accurate architectural simulators of a high-end computing node at the cloud, and a low-end computing device at the edge, respectively, that can help researchers quickly explore their architectural design ideas for faster development of the next-generation computing platforms.


Intra-Systems Seminar (03 March, 2020)

Speaker: Colin Perkins

Title: Can we Improve Internet Protocol Standards?
Speaker: Colin Perkins (30mn)

This talk will give a brief outline of some ongoing work to improve the way Internet protocol standards are developed. It will review how protocols are developed into standards that can be implemented by industry, consider some of the challenges in developing such standards, and review new tooling and approaches we're exploring to improve the process. The goal of the work is to improve the quality and trustworthiness of the Internet by improving the quality of its underlying specifications.


CANCELLED: Systems Seminar: Technology-neutral technology (26 February, 2020)

Speaker: Tristan Henderson

While our data-driven society has given us lots of COOL NEW STUFF and exciting applications for computer scientists to build and study, there have also been some less positive consequences, as evinced by current controversies such as Cambridge Analytica, discriminatory or biased classification algorithms, or power imbalances between large data controllers such as Google and individual data subjects. Technology laws such as the General Data Protection Regulation are one mechanism of protecting citizens. One principle of such laws is that they should be "technologically neutral" (GDPR recital 15). This can help promote innovation by not discriminating against particular technologies, or prevent law from becoming out of date. But as Hildebrandt and Tielemans (2013) point out, technology neutral law might also require some technology specifics in order to prevent technological threats to human rights.

This talk will introduce the concept of technology neutrality and compare it to something perhaps more familiar to systems designers: abstraction and layering. I compare the objectives of technology neutrality and the objectives of building long-lived systems such as the Internet, and see what the two can learn from each other. I will also discuss some ongoing empirical work with respect to the GDPR: can we trust technologists to build systems on their own or do we need other regulatory modalities (Lessig's pathetic dot) to ensure that systems achieve desired societal outcomes?

Speaker bio:

Tristan Henderson is a Senior Lecturer in Computer Science at the University of St Andrews. He has worked in the broad area of networked communications for two decades. His research aims to better understand user behaviour in networked systems and use this to build improved systems; an approach which has involved measurements and testbeds for networked games, wireless networks, mobile sensors, smartphones, online social networks and opportunistic networks. Most recently his work has moved into the privacy and ethical aspects of such research, which has led in turn to an interest in the law and how technology, ethics and the law can jointly regulate behaviour.

Tristan holds an MA in Economics from Cambridge, an MSc and PhD in Computer Science from UCL, and recently completed an LLM in Innovation, Technology and the Law at Edinburgh. For more information see https://tnhh.org/


Intra-Systems Seminar (25 February, 2020)

Speaker: Ibrahim Ahmed

Title: On the Optimality of Tasks Offloading in Mobile Edge Computing Environments.

Speaker: Ibrahim Ahmed (30mn)

Abstract:

Mobile Edge Computing (MEC) has emerged as a new computing paradigm to improve the QoS of mobile nodes' applications. A key use case in MEC is computation offloading, whose goal is to enhance mobile nodes' capabilities to meet the requirements of new applications. While a mobile node is on the move and lacks information about the potential MEC servers for offloading, computation offloading faces the challenge of where and when to offload computing tasks. In my talk, I will present how we try to solve this challenge by applying Optimal Stopping Theory.
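For a flavour of the theory only (this is not the speaker's actual policy), the classic secretary-problem stopping rule, applied to MEC servers encountered sequentially, might look like this in Python:

import math, random

def offload_choice(server_scores):
    # server_scores: utilities of servers in the order they are encountered.
    n = len(server_scores)
    k = max(1, int(n / math.e))          # observation phase: look, don't leap
    threshold = max(server_scores[:k])
    for score in server_scores[k:]:
        if score > threshold:            # first server beating all seen so far
            return score
    return server_scores[-1]             # forced to settle for the last one

random.seed(1)
scores = [random.random() for _ in range(20)]
print("chose", offload_choice(scores), "best was", max(scores))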


Systems Seminar: Energy-aware Self-Adaptation for Application Execution on Heterogeneous Parallel Architectures (19 February, 2020)

Speaker: Karim Djemame

Hardware in High Performance Computing environments has increasingly become more heterogeneous in order to improve computational performance. An additional aspect of heterogeneous systems is the management of power and energy consumption. The increase in heterogeneity requires middleware and programming model abstractions to address the complexities that it brings. This talk will explore application level self-adaptation including aspects such as automated configuration and deployment of applications on heterogeneous infrastructures as well as their redeployment. It will present a self-adaptive framework that manages application Quality of Service (QoS) at runtime, which includes the automatic migration of applications between different acceleration infrastructures. The discussion covers when migration is appropriate and quantifies the likely benefits.


Systems Seminar: Privacy and security risk – making up the numbers? (12 February, 2020)

Speaker: Eerke Boiten

Having an accurate view of privacy and security risk is both obviously useful and typically elusive. Quantification should help in theory, but general risk assessment methods don't convince in this respect. This talk considers two specific scenarios from separate research projects (privacy impact assessment, and cyber intelligence sharing) and attempts at quantification in each. Are there general lessons from considering these individually and together?


Intra-Systems Seminar (11 February, 2020)

Speaker: Marco Cook

Title: Introducing a Forensic Data Taxonomy for Programmable Logic Controllers (PLCs)

Speaker: Marco Cook (30mn)

Abstract:

The understanding of available data artefacts from any sources of interest is fundamental to performing digital forensics. The acquirable data artefacts from IT systems, such as Windows and Linux operating systems, are well established; however, this is not the case for programmable logic controllers (PLCs), commonly found within safety-critical systems, which often contain proprietary technologies and architectures. In this talk I will present a taxonomy that introduces PLC data types and potential forensic artefacts, as well as some of the data acquisition methods that were used to establish this taxonomy.


Systems Seminar: Challenges in the Decentralised Web (05 February, 2020)

Speaker: Gareth Tyson

The Decentralised Web (DW) has recently seen a renewed momentum, with a number of DW platforms like Mastodon and PeerTube gaining increasing traction. These offer alternatives to traditional 'centralised' social networks like Twitter and YouTube by enabling the operation of web infrastructure and services without centralised ownership or control. They do, however, raise a number of key challenges related to performance, security and resilience. In this seminar, I will present a measurement study of the DW, and discuss our empirical exploration of some of the key challenges in this area. The presentation is based on a recent paper published in the ACM Internet Measurement Conference 2019 (http://www.eecs.qmul.ac.uk/~tysong/files/Mastodon.pdf).


Systems Seminar: Harnessing the power of GPUs (ABM framework on GPU) (29 January, 2020)

Speaker: Mozhgan Kabiri Chimeh

Modelling and simulation of complex problems has become an established 'third pillar' of science, complementary to theory and experimentation. In order for multi-agent modelling and simulation to be used as a tool for delivering excellent science, it is vital that simulation performance can scale, by targeting readily available computational resources effectively. In this talk, I will present an Agent Based Modelling (ABM) framework on GPU as an example application that makes modellers' lives easier by allowing them to concentrate on writing models, without the need to acquire the specialist knowledge typically required to utilise GPU architectures.

Moreover, I will talk about hackathons and bootcamps: community-driven efforts to accelerate codes on GPUs, which help researchers and developers start accelerating and optimising code on GPUs within a short period of time.


Intra-Systems Seminar (28 January, 2020)

Speaker: Phil Trinder

Title:

Comparing Reliability Mechanisms for Secure Web Servers:
Actors, Exceptions and Futures

Speaker: Phil Trinder (1h)

Abstract:

Security is critical for all software, and especially for web applications that are globally accessible. In a typical online interaction the user's identity, and their access rights, must be verified. That is, the user is authenticated and their actions are authorised.

The web application programming language implements the security protocols and provides reliability mechanisms to recover from security failures. Programming languages offer a choice of reliability mechanisms, and many are available in a single language. Three common reliability mechanisms are as follows, and all are available in languages like Scala or C++. Exceptions are widely available, and often formulated as try-catch blocks. The Actor model adopts a let-it-crash philosophy that allows failing actors to die while an associated supervisor deals with the aftermath. With Futures, failures are managed by specifying actions for both successful and unsuccessful outcomes.

Currently, there is little information to guide developers when selecting between reliability mechanisms, and this paper compares the performance and programming complexity of exceptions, actors and futures for handling security failures in Scala web apps. The comparison is based on measurements of three instances of a simple web app that differ only in using exceptions, actors or futures for recovering from authentication or authorisation failures. We make the following research contributions.

(1) We compare the performance (throughput and latency) of the reliability mechanisms as the number of concurrent connections to the webserver ranges between 50 and 3200. The workloads we consider include 100% successful, 100% unsuccessful, and a realistic mixture of successful/unsuccessful requests.

(2) We compare the programming effort required to secure the web app and the associated attack surface. The metric we use is the number of program states that the programmer must consider and secure.


Systems Seminar: Optimising Computer Systems in Complex and Dynamic Parameter Space (22 January, 2020)

Speaker: Eiko Yoneki

Performance tuning of computer systems is challenging for a variety of reasons. Modern computer systems expose many configuration parameters in a complex, massive parameter space. The systems are nonlinear and there is no method for quantifying and modelling such systems by performance tuning to the level of precision required. Furthermore, scheduling of tasks or resource allocation may require the control of dynamically evolving tasks. Auto-tuning has recently emerged using a black-box optimiser such as Bayesian Optimisation (BO). However, BO features limited scalability. Reinforcement Learning (RL) could be applied for combinatorial optimisation problems, but there is a gap between current research and practical RL deployments. I will introduce our frameworks to tackle the issues above and demonstrate the potential of machine learning based methodologies for computer systems optimisation.
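A minimal sketch of the BO-style tuning loop, with a synthetic one-knob system and sklearn's Gaussian process regressor standing in for a production tuner (a real system would use a proper acquisition function such as expected improvement):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def runtime(batch_size):                      # pretend system: optimum unknown
    return (batch_size - 192) ** 2 / 1e3 + np.random.normal(0, 0.5)

candidates = np.arange(16, 513, 16, dtype=float).reshape(-1, 1)
X = [[16.0], [512.0]]                         # two initial measurements
y = [runtime(x[0]) for x in X]

for _ in range(10):
    gp = GaussianProcessRegressor().fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    pick = float(candidates[np.argmin(mu - sigma)][0])  # optimistic choice
    X.append([pick])
    y.append(runtime(pick))

print("best batch size found:", X[int(np.argmin(y))][0])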


Intra-Systems Seminar (21 January, 2020)

Speaker: Cristian Urlea

Title: Efficient Design Space Exploration
Speaker: Cristian Urlea (30mn)


Abstract:

Numerous programming languages and frameworks provide the means to optimise application execution on parallel architectures. The process of selecting which optimising transformations to apply, and where to apply them, dubbed Design Space Exploration (DSE), largely remains a manual endeavour.
In this talk I will discuss how one may derive an efficient and automated DSE strategy using a description of the application's semantic structure, an accurate cost-performance model and a description of hardware resource limits.
The presentation primarily focuses on DSE as it applies to compiling streaming data-flow applications for execution on a Field Programmable Gate Array (FPGA); however, the techniques are generally applicable to other parallel architectures.


Systems Seminar: Metaprogramming, Metaobject Protocols, Gradual Type Checks: Optimizing the "Unoptimizable" Using Old Ideas (15 January, 2020)

Speaker: Stefan Marr

Metaobject Protocols and Type Checks, do they have much in common? Perhaps not from a language perspective. However, under the hood of a modern virtual machine, they turn out to show very similar behavior and can be optimized very similarly.

This talk will go back to the days of Terminator 2, The Naked Gun 2 1/2, and Star Trek VI. We will revisit the early days of just-in-time compilation, the basic insights that are still true, and see how to apply them to metaprogramming techniques of different shapes and forms.


Intra-Systems Seminar (14 January, 2020)

Speaker: Stefanos Sagkriotis

Title: Fighting battery depletion within clusters of edge devices.
Speaker: Stefanos Sagkriotis (30mn)

Abstract: This talk will demonstrate how datacenter container management technology can be coupled with edge/IoT devices. The combination offers a versatile infrastructure comprised of heterogeneous devices that can form micro-clusters at the edge. Since IoT devices operate on limited energy capacity, we assess the impact of container execution on IoT devices by building their energy profile. The profiles obtained suggest that container management processes have high energy consumption. We suggest a greedy approach to spread this consumption among the nodes of the cluster.
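A small sketch of one plausible greedy placement rule (node names and energy costs are invented for illustration): place each container on the node with the most remaining battery, net of load already assigned.

def greedy_place(containers, battery):
    # containers: {name: energy cost}; battery: {node: remaining capacity}
    placement = {}
    for name, cost in sorted(containers.items(), key=lambda kv: -kv[1]):
        node = max(battery, key=battery.get)   # most remaining energy first
        placement[name] = node
        battery[node] -= cost
    return placement

nodes = {"pi-1": 100.0, "pi-2": 80.0, "pi-3": 60.0}
jobs = {"sensor-agg": 30.0, "mqtt-broker": 20.0, "ml-infer": 45.0}
print(greedy_place(jobs, nodes))  # ml-infer -> pi-1, sensor-agg -> pi-2, mqtt-broker -> pi-3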


Systems Seminar: Rethinking Networks and Applications with Programmable Data Planes (08 January, 2020)

Speaker: Gianni Antichi

The possibility to easily add new functionality to network data planes has lately opened new exciting research directions towards understanding how such programmability can impact the design of networks as well as their services. In this talk, I will present some of the work I have been doing in rethinking the role of programmable switches and network interface cards in support of network function and end host application acceleration. I will discuss how different operations, e.g., fast reroute upon link failures and path tracing, can be efficiently performed by adding new features to network switches. I will then analyse how new programmable network interface cards can also be leveraged for application offloading. Finally, I will discuss open research questions and conclude with an analysis of the limitations imposed by the state-of-the-art networking systems programming language, i.e., P4.


Systems Seminar: Delivering easy-to-use frameworks to empower data-driven research (11 December, 2019)

Speaker: Rosa Filgueira

In the DARE and VERCE projects we have been working on several solutions for enabling easy data-intensive workflow composition and deployment on clouds and/or HPC systems. Our work translates scientists' methods to concrete scientific workflows that are portable and reproducible on different computing environments with little or no change. To achieve this, we have combined the strengths of dispel4py, CWL (and previously Pegasus) scientific workflows, Docker containers, Kubernetes infrastructure orchestration, Jupyter notebooks, and Cloud platforms. This work has been carried out in collaboration with the Climatology and Seismology domains.


Systems Seminar: Designing usable APIs and SDKs for developers (20 November, 2019)

Speaker: Steven Clarke

The Developer Division User Experience Research group at Microsoft facilitates over 10,000 interactions per year with software professionals in order to learn about how to improve the experience those professionals have with Microsoft developer tools and services. One area we specialise in is learning what is involved in designing and building APIs and SDKs that developers will not only be able to use, but will enjoy using. In this talk, I will talk about how we use UX research techniques when designing the user experience for an API. I’ll describe what motivated us to pay so much attention to the user experience of an API and will talk about how our approach has changed over the years.


Systems Seminar: Private data collection in sensor and network systems (13 November, 2019)

Speaker: Rik Sarkar

Pervasive sensing and computing devices generate large volumes of data which can be monitored to improve computer systems and infrastructures; but at the same time they pose issues of privacy. Data sharing methods that guarantee privacy of participants are necessary to reassure both contributors and users of data. In this talk, we will discuss recent and ongoing work in the use of statistical methods for private data sharing. The objective of these methods is to privatise data close to the source (e.g. at the network edge) to ensure that non-private data is never used for analytics. We will discuss these ideas in the context of sensing and IoT data, distributed/edge computing and measurements in anonymous networks such as Tor.
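As a hedged sketch of what privatising at the source can mean, each device can add calibrated Laplace noise to its reading before it ever leaves the edge (local differential privacy; parameters illustrative):

import numpy as np

def privatise(reading, sensitivity=1.0, epsilon=0.5):
    # Release reading + Laplace(sensitivity / epsilon) noise.
    return reading + np.random.laplace(scale=sensitivity / epsilon)

true_readings = np.random.uniform(0, 1, size=10_000)   # raw sensor values
released = np.array([privatise(r) for r in true_readings])

# Individual releases are noisy, but aggregates remain useful:
print("true mean:", true_readings.mean(), "released mean:", released.mean())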

Speaker Bio: Dr Rik Sarkar is a Lecturer at the School of Informatics, University of Edinburgh, and Deputy Director of the Laboratory for Foundations of Computer Science. Dr Sarkar is known for his work on information processing in sensor networks and network analysis. His current interests include statistical privacy, privacy-preserving machine learning, reliable learning and network analysis.


Systems Seminar: Protecting the LHC from itself (30 October, 2019)

Speaker: Robbie Simpson

Creating particle collisions powerful enough to discover new physics requires vast amounts of energy.
How is this safely managed?
What happens if something goes wrong?
Can problems be detected in the microseconds before they become critical?
And if we can detect them, what can be done to limit the damage?

These are all challenges for the Machine Protection group at CERN. In this talk we'll discuss the interlock and protection systems that ensure safe operation of the world's most powerful particle accelerator. We'll take a special look at the engineering challenges of developing and maintaining the software for accelerator control systems, and at some of the unique factors of software in high-energy physics.


Intra-Systems Seminar: Enhancing Internet video delivery (30 October, 2019)

Speaker: Mihail Yanev

MPEG's Dynamic Adaptive Streaming over HTTP (MPEG-DASH) is the most widely used method to deliver video content over the network today. Even though the problem of how video should transit the network is solved, the issue of how to deliver the best video quality given network and device constraints remains. In my talk I will (i) explain the motivation behind dynamic streaming, (ii) discuss the issues that make dynamic streaming an interesting problem, and (iii) suggest an approach which might overcome some of the issues outlined in (ii).

P.S. Yes, my research is about delivering higher quality cat videos faster...
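For a flavour of the adaptation logic at issue, a throughput-based bitrate picker of the kind DASH players commonly use might look like this (the bitrate ladder and safety margin are invented):

LADDER_KBPS = [235, 750, 1750, 4300, 8000]    # available representations

def pick_bitrate(recent_throughputs_kbps, margin=0.8):
    # Estimate bandwidth from recent segment downloads, keep a safety margin,
    # and choose the highest representation that still fits.
    estimate = sum(recent_throughputs_kbps) / len(recent_throughputs_kbps)
    fitting = [b for b in LADDER_KBPS if b <= margin * estimate]
    return fitting[-1] if fitting else LADDER_KBPS[0]

print(pick_bitrate([6000, 5500, 6500]))       # -> 4300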


Systems Seminar: 21,117 Repos Later: Developing Tool Support for Courses using Git & GitHub/GitLab (23 October, 2019)

Speaker: Ric Glassey

The use of version control systems within computing education is growing in popularity. However, this is challenging because such systems are not particularly well designed to support educational situations, nor are they easy to use with confidence in teaching, as specialist knowledge and experience are required. At KTH: Royal Institute of Technology, we have five years of experience of introducing first year computer science students to version control as a "day one" skill, and building upon this throughout the whole first year. This talk will provide some pearls (and perils) from our hard-won experience, as well as introduce the open source tool (Repobee) we have developed based on this experience, to help any teacher introduce version control as a course management technology.


Intra-Systems Seminar: “I do it because they do it”: Social-Neutralisation in Information Security Practices (23 October, 2019)

Speaker: Saad ALTamimi

Successful implementation of information security policies (ISPs) and IT controls plays an important role in safeguarding patient privacy in healthcare organizations. Our study investigates the factors that lead to healthcare practitioners' neutralisation of ISPs, leading to noncompliance. The study adopted a qualitative approach and conducted a series of semi-structured interviews with medical interns (MIs) and hospital IT department managers and staff in an academic hospital in Saudi Arabia. The study's findings revealed that MIs imitate their peers' actions and employ similar justifications when violating ISPs. Moreover, ISP non-compliance by MI team superiors (seniors) influences MIs' tendency to invoke neutralisation techniques. We found that trust between medical team members is an essential social facilitator that motivates MIs to invoke neutralisation techniques to justify violating ISP policies and controls. These findings add new insights that help us to understand the relationship between the social context and neutralisation theory in triggering ISP non-compliance.


Systems Seminar: Understanding QUIC Dynamics over Broadband Satellite (16 October, 2019)

Speaker: Gorry Fairhurst

This talk introduces IETF QUIC, a next generation web transport protocol. It presents a set of transport performance measurements using a recent QUIC implementation, Quicly, developed by Fastly. The results will be reviewed in the context of the dynamics of a broadband satellite access system. This considers interactions between the physical layer properties (geostationary orbit delay), radio resource management (access time, asymmetry, and capacity limits) and the transport system provided by QUIC. Our data is drawn from measurements using an operational geostationary satellite service in a study funded by the European Space Agency.


Intra-Systems Seminar: Python Programmers Have GPUs too (16 October, 2019)

Speaker: Dejice Jacob

Python is a popular language for end-user software development in many application domains.  End-users want to harness parallel compute resources effectively, by exploiting commodity manycore technology including GPUs.  However, existing approaches to parallelism in Python are esoteric, and generally seem too complex for the typical end-user developer. We argue that implicit, or automatic, parallelization is the best way to deliver the benefits of manycore to end-users, since it avoids domain-specific languages, specialist libraries, complex annotations or restrictive language subsets.  Auto-parallelization fits the Python philosophy, provides effective performance, and is convenient for non-expert developers.

Despite being a dynamic language, we show that Python is a suitable target for auto-parallelization. In an empirical study of 3000+ open-source Python notebooks, we demonstrate that typical loop behaviour 'in the wild' is amenable to auto-parallelization. We show that staging the dependence analysis is an effective way to maximize performance. We apply classical dependence analysis techniques, then leverage the Python runtime's rich introspection capabilities to resolve additional loop bounds and variable types in a just-in-time manner. The parallel loop nest code is then converted to CUDA kernels for GPU execution. We achieve orders of magnitude speedup over baseline interpreted execution and some speedup (up to 50x, although not consistently) over CPU JIT-compiled execution, across 12 loop-intensive standard benchmarks.

PREPRINT
http://www.dcs.gla.ac.uk/~jacobd/ALPyNA_Python_Parallelization_DLS19.pdf
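ALPyNA's dependence analysis is automatic; as a rough illustration of the kind of loop nest it targets, here is the same idea written with explicit annotations using the third-party Numba package (the manual route that auto-parallelization aims to make unnecessary):

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def saxpy(a, x, y):
    out = np.empty_like(x)
    for i in prange(x.shape[0]):   # no cross-iteration dependences
        out[i] = a * x[i] + y[i]
    return out

x = np.ones(1_000_000)
y = np.ones(1_000_000)
print(saxpy(2.0, x, y)[:3])        # [3. 3. 3.]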


Systems Seminar: (How Much) Does a Private WAN Improve Cloud Performance? (10 October, 2019)

Speaker: Vasileios Giotsas

The build-out of private Wide Area Networks (WANs) by cloud providers allows providers to extend their network to more locations and establish direct connectivity with end user Internet Service Providers (ISPs). Tenants of the cloud providers benefit from this proximity to users, which is supposed to provide improved performance by bypassing the public Internet. However, the performance impact of private WANs is not widely understood. To isolate the impact of a private WAN, we measure from globally distributed vantage points to a large cloud provider, comparing performance when using its worldwide WAN and when forcing traffic to instead use the public Internet. The benefits are not universal. While 40% of our vantage points saw improved performance when using the WAN, half of our vantage points did not see significant performance improvement, and 10% had better performance over the public Internet. We find that the benefits of the private WAN tend to improve with client-to-server distance, but that the benefits (or drawbacks) to a particular vantage point depend on specifics of its geographic and network connectivity.


Intra-Systems Seminar: Taming Anycast in a Wild Internet (09 October, 2019)

Speaker: Stephen McQuistin

Anycast is a popular tool for deploying global, widely available systems, including DNS infrastructure and Content Delivery Networks (CDNs). The optimization of these networks often focuses on the deployment and management of anycast sites. However, such approaches fail to consider one of the primary configurations of a large anycast network: the set of networks that receive anycast announcements at each site (i.e., an announcement configuration). Altering these configurations, even without the deployment of additional sites, can have profound impacts on both anycast site selection and round-trip times. In this talk, I'll explore the operation and optimization of anycast networks through the lens of deployments that have a large number of upstream service providers. I'll demonstrate that these many-provider anycast networks exhibit fundamentally different properties than few-provider networks when interacting with the Internet, having a greater number of single AS-hop paths, and reduced dependency on each provider. I'll further examine the impact of announcement configuration changes, demonstrating that in nearly 30% of vantage point groups, round-trip time performance can be improved by more than 25%, solely by manipulating which providers receive anycast announcements. Finally, I’ll describe DailyCatch, an empirical measurement system for testing and validating announcement configuration changes, and demonstrate its ability to influence user-experienced performance on a global, anycast CDN.


Systems Seminar: Building a New World of Anonymisation, Trust and Privacy (25 September, 2019)

Speaker: Bill Buchanan

We have often built systems which lack any form of real trust, and which have little respect for privacy and consent. This presentation will outline some of the methods which can be used to build a more trustworthy world, and will include the theoretical and practical aspects of zero-knowledge proofs, homomorphic encryption, ring signatures, privacy-preserving machine learning and bulletproofs. Along with this the presentation will outline the key risks that we have around the development of quantum computers and will outline the methods which could be used to overcome the breaking of public key methods. Finally the talk will outline light-weight cryptography methods, and how we can secure devices with limited capabilities.
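To make the homomorphic encryption part concrete, here is a toy additively homomorphic example using the third-party python-paillier package (assumed available; it is not part of the talk's own tooling). Sums and scalar multiples are computed on ciphertexts, without decrypting the operands:

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()
c1 = public_key.encrypt(5)
c2 = public_key.encrypt(7)

print(private_key.decrypt(c1 + c2))   # 12: addition done on ciphertexts
print(private_key.decrypt(c1 * 3))    # 15: scalar multiplication likewise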


Intra-Systems Seminar: Opening Talk for 2019/2020 (25 September, 2019)

Speaker: Dimitrios Pezaros


Systems Seminar: Selective Applicative Functors (17 April, 2019)

Speaker: Andrey Mokhov

Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically.

This talk presents an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two real-life case studies.
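A loose Python transliteration of the core combinator may help: select consumes an effectful Either-like value and an effectful handler, declares both statically, but only needs the handler dynamically when the value is a Left. The Over-style instance below, which merely records declared effects, is the kind of over-approximating instance used for static analysis; names follow the Haskell abstraction only loosely.

from dataclasses import dataclass

@dataclass
class Over:                       # static view: a record of declared effects
    effects: tuple
    def select(self, handler):
        # Over-approximate: count the handler's effects even though at
        # run time they might be skipped.
        return Over(self.effects + handler.effects)

@dataclass
class Run:                        # dynamic view: a suspended computation
    thunk: object                 # () -> ('left', a) or ('right', b)
    def select(self, handler):
        def go():
            tag, val = self.thunk()
            # The handler (a function a -> b) runs only for 'left'.
            return handler.thunk()(val) if tag == 'left' else val
        return Run(go)

print(Over(("read A",)).select(Over(("read B",))).effects)  # both declared
branch = Run(lambda: ('left', 3))
double = Run(lambda: (lambda x: x * 2))
print(branch.select(double).thunk())                        # 6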


Algorithmic Optimisations for Graph Analytics (20 March, 2019)

Speaker: Hans Vandierendonck

This talk will explore compiler optimisations for graph processing in the context of Pregel-like programming models. Graph processing is a time-consuming and challenging workload due to the size of graph data sets and the irregular memory access patterns that arise from the interconnection pattern of the graph. Key performance bottlenecks relate to random memory access patterns with poor cache locality, and conditional branches that are dependent on long-latency loads. I will present five optimisations that address one or both of these performance bottlenecks: convergence, level-asynchronous execution, deferred updates, unconditional execution and memoisation. The validity of these optimisations depends on the properties of the graph algorithms and I will show how to assess validity. The talk will focus on graph analytics algorithms that can be represented mathematically through sparse linear algebra. In particular, they are defined by a recurrence relation based on sparse matrix-vector multiplication and an application-specific semiring. I will infer sufficient conditions for the correctness of the optimisations based on the properties of the semiring. I will present experimental results that show the performance impact of the optimisations. 
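To make the linear-algebra view concrete, below is a small sketch of a sparse matrix-vector product parameterised by a semiring: with (min, +) and infinity as the additive identity it becomes a Bellman-Ford relaxation step, while (+, *) with zero gives ordinary SpMV. The graph and weights are invented:

INF = float('inf')

def spmv(in_edges, vec, add=min, mul=lambda a, b: a + b, zero=INF):
    # in_edges: {dst: [(src, weight), ...]}; vec: current value per vertex.
    out = {}
    for dst, edges_in in in_edges.items():
        acc = zero
        for src, w in edges_in:
            acc = add(acc, mul(vec[src], w))
        out[dst] = add(acc, vec[dst])    # keep the current distance too
    return out

in_edges = {0: [], 1: [(0, 5)], 2: [(0, 2), (1, 1)], 3: [(2, 7)]}
dist = {0: 0, 1: INF, 2: INF, 3: INF}
for _ in range(len(dist) - 1):           # |V| - 1 relaxation rounds
    dist = spmv(in_edges, dist)
print(dist)                              # {0: 0, 1: 5, 2: 2, 3: 9}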




Data Driven Solutions for Marine Applications (27 February, 2019)

Speaker: Andrea Coraddu

Data Analytics is improving our ability to understand complex phenomena as fast as, and even faster than, a priori physical models have done in the past. Engineering systems are composed of many complex elements; their mutual interaction is not easy to model and predict with conventional first-principles physics models based on a priori physical knowledge, because of the significant number of parameters which influence their behaviour. Moreover, state-of-the-art models built upon the physical knowledge of the system may have computationally prohibitive requirements. First-principles physics models describe the behaviour of systems based on governing physical laws, taking into account their mutual interactions. The higher the detail in the modelling of the physical equations, the higher the expected accuracy of the results and the computational time required for the simulation. These models are generally rather tolerant to extrapolation and do not require extensive amounts of operational measurements. On the other hand, when employing models that are computationally fast enough to be used for online optimisation, the expected accuracy in the prediction of operational variables is relatively low. Additionally, the construction of the model is a process that requires competence in the field and availability of technical details which are often not easy to access.


Data-driven models, instead, exploit advanced statistical techniques to build models directly from the large amounts of historical data collected by recent advanced automation systems, without any a priori knowledge of the underlying physical system. Data-driven models are extremely useful when it comes to continuously monitoring physical systems to avoid preventive or corrective maintenance and to take decisions based on the actual condition of the system. Unfortunately, data-driven models need a large amount of data to achieve satisfying accuracies, and this can be a drawback when data collection might require stopping the asset. For these reasons, the different modelling philosophies must be exploited in conjunction in order to overcome their drawbacks and take the best of each approach.


The seminar will focus on physical, data-driven, and hybrid models for marine engineering applications. Examples and real-life problems will be proposed and analysed, from bearing fault prediction, to energy optimisation, to fuel consumption prediction.


Defining interfaces between hardware and software: quality and performance (21 February, 2019)

Speaker: Alastair Reid

One of the most important interfaces in a computer system is the interface between hardware and software. This talk examines two critical aspects of defining the hardware-software interface: quality and performance.

The first aspect concerns the "radical" idea of creating a single, high-quality, formal specification of microprocessors that everybody can use. This idea does not seem "radical" until you realize that standard practice is for every group to create their own version of a specification in their preferred toolchain. I will describe the challenges that lead to this behavior and how to overcome the challenges. This project led to the creation of Arm's official formal specification of their microprocessors and to the formal validation of Arm's processors against that specification.

The second aspect concerns the tradeoff between portability and performance in the context of high performance, energy efficient, parallel systems. I will describe the challenges in balancing portability and performance and how I overcame them by defining the hardware-software interface in terms of extensions of the C language. This project played a key part in the creation of a software-defined radio system capable of implementing the 4G cellphone protocol.

The Arm architecture is the largest computer architecture by volume in the world; it behooves us to ensure that the interface it describes is appropriately defined.


End-to-end verification using CakeML (21 February, 2019)

Speaker: Magnus Myreen

CakeML is a functional programming language and an ecosystem of proofs and tools built around the language. The ecosystem includes program verification tools and a proven-correct compiler that can bootstrap itself.

In this talk, I will introduce the CakeML project, present its compiler, and describe how CakeML tools can be used to prove functional correctness down to the machine code that runs on the hardware. I will also talk about recent developments, including how we have proved correctness of a CPU that the CakeML compiler can compile to, and how we plan to support proofs about space usage of the compiled programs -- thus proving that programs stay within their stack and heap limits.

The CakeML project is a collaboration between several sites. Read more about the project here: http://mort.io


Systems Seminar: Enabling Heterogeneous Network Function Chaining (30 January, 2019)

Speaker: Dr Psoco Tso

Today’s data center operators deploy network policies in both physical (e.g., middleboxes, switches) and virtualized (e.g., virtual machines on general purpose servers) network function boxes (NFBs), which reside at different points of the network, to exploit their efficiency and agility respectively. Nevertheless, such heterogeneity has resulted in a great number of independent network nodes that can dynamically generate and implement inconsistent and conflicting network policies, making correct policy implementation a difficult problem to solve. Since these nodes have varying capabilities, services running atop are also faced with profound performance unpredictability.

This talk proposes a Heterogeneous netwOrk Policy Enforcement (HOPE) scheme to overcome these challenges. HOPE guarantees that network functions (NFs) that implement a policy chain are optimally placed onto heterogeneous NFBs such that the network cost of the policy is minimized. We first experimentally demonstrate that the processing capacity of NFBs is the dominant performance factor. This observation is then used to formulate the Heterogeneous Network Policy Placement problem, which is shown to be NP-Hard. To solve the problem efficiently, an online algorithm is proposed. Our experimental results demonstrate that HOPE achieves the same optimality as Branch-and-bound optimization but is 3 orders of magnitude more efficient.


Deep Diving into the Security and Privacy of the Fitbit Ecosystem (12 December, 2018)

Speaker: Paul Patras

In this talk I will present an in depth security and privacy analysis of the Fitbit ecosystem. I will first reveal an intricate security through obscurity implementation of the user activity synchronization protocol on early device models. Based on reverse engineering, I will show how sensitive personal information can be extracted in human-readable format, and demonstrate that malicious users can inject fabricated activity records to obtain personal benefits. I will also discuss how attackers can exploit protocol weaknesses to associate nearby victim trackers with a controlled account and subsequently exfiltrate fitness data. Second, I will reveal that the firmware update procedure can be compromised and the code running on devices within wireless range can be modified without consent. Finally, I will discuss how we altered the official smartphone app with the aim of subverting the Fitbit cloud. Although the majority of the vulnerabilities identified have been patched, the lessons learned apply to other Internet of Things applications where the smartphone mediates between user, device, and service.


References:

http://homepages.inf.ed.ac.uk/ppatras/pub/imwut18.pdf

http://homepages.inf.ed.ac.uk/ppatras/pub/raid17.pdf


Bio:

Paul Patras is a Lecturer and Chancellor's Fellow in the School of Informatics at the University of Edinburgh, where he leads the Internet of Things Research Programme. He received his Ph.D. from University Carlos III of Madrid and held visiting research positions at the University of Brescia, Northeastern University, TU Darmstadt, and Rice University. His research interests include performance optimisation in wireless and mobile networks, applied machine learning, mobile traffic analytics, security and privacy, prototyping and test beds.



GenSim: A Toolkit for Efficient Binary Translation (05 December, 2018)

Speaker: Harry Wagstaff

Dynamic binary translation (DBT) and emulation are increasingly useful tools in a wide variety of contexts, from backwards compatibility, to software debugging, to architectural design and prototyping. However, there is usually a requirement for high performance of such tools, which can be difficult to attain without detailed knowledge of DBT and emulation.

This talk introduces GenSim, a toolset developed and used within the University of Edinburgh to aid and accelerate binary translation and simulation research. GenSim consists of an ADL (Architecture Description Language), ADL Compiler, and simulation tools (ArchSim and Captive) which can be used for simulation, or as platforms for DBT research. By using these tools we can achieve performance exceeding that of the state of the art (QEMU), without requiring detailed domain knowledge.


Is it Time for RISC and CISC to Die? (28 November, 2018)

Speaker: Aaron Smith

Abstract: Specialization, accelerators, and machine learning are all the rage. But most of the world's computing today still uses conventional RISC and CISC CPUs, which expend significant energy to achieve high single-thread performance. Von Neumann ISAs have been so successful because they provide a clean conceptual target for software while running a wide range of algorithms reasonably well. We badly need clean new abstractions that utilize fine-grain parallelism and run energy efficiently.


Prior work (such as the UT-Austin TRIPS EDGE ISA and others) showed how to form blocks of computation containing limited-scope dataflow graphs, which can be thought of as small structures (DAGs) mapped to silicon. In this talk I will describe work that addresses the limitations of early EDGE ISAs and provide results for two specific microarchitectures developed in collaboration with Qualcomm Research. These early results are based on placed and routed RTL in 10nm FinFET.


Bio: Aaron is a part-time Reader in the School of Informatics at the University of Edinburgh and Principal Researcher at Microsoft. In Edinburgh he co-teaches UG3 Compiling Techniques and is working on binary translation and machine learning related projects. At Microsoft he leads a research team investigating hardware accelerators for datacenter services. He is active in the LLVM developer community and a number of IEEE and ACM conferences and has over 50 patents pending.



High-Level Hardware Feature Extraction for GPU Performance Prediction of Stencils (27 November, 2018)

Speaker: Michel Steuwer


Beyond Enhancing GPU accelerators in HPC Clusters: What now? (21 November, 2018)

Speaker: Carlos Reaño

Over the past decade, Graphics Processing Units (GPUs) have been adopted in many computing facilities given their extraordinary computing power, which has made it possible to accelerate many general purpose applications from different domains. In this presentation we show different ways of enhancing the use of these accelerators in high-performance computing clusters. Finally, we look to the future to envision what the next stage might be, and what could be done with all the experience gained in this field.


Modelling and Verification of Large-Scale Sensor Network Infrastructures (14 November, 2018)

Speaker: Michele Sevegnani

Large-scale wireless sensor networks (WSN) are increasingly deployed and an open question is how they can support multiple applications. Networks and sensing devices are typically heterogeneous and evolving: topologies change, nodes drop in and out of the network, and devices are reconfigured. In this talk, I will present a modelling and verification framework for WSNs based on Bigraphical Reactive Systems (BRS) for modelling, with bigraph patterns and temporal logic properties for specifying application requirements. The bigraph diagrammatic notation provides an intuitive representation of concepts such as hierarchies, communication, events and spatial relationships, which are fundamental to WSNs. The key question we address is how to verify that application requirements are met, individually and collectively, and can continue to be met, in the context of large-scale, evolving network and device configurations. A prototype implementation of our approach is demonstrated through a real-life urban environmental monitoring case-study.

This is joint work with Milan Kabac, Muffy Calder and Julie McCann.

Modelling and Verification of Large-Scale Sensor Network Infrastructures, Michele Sevegnani, Milan Kabac, Muffy Calder and Julie A. McCann, in proceedings of International Conference on Engineering of Complex Computer Systems (ICECCS 2018), IEEE, to appear.



Current research on data centre transport protocols and Internet traffic analysis (07 November, 2018)

Speaker: George Parisis

In this talk I will be focussing on our research on data centre transport protocols and Internet traffic analysis. I will discuss Polyraptor, a novel data transport protocol that uses RaptorQ (RQ) codes. Polyraptor is tailored for one-to-many and many-to-one data transfer patterns, supports multi-path transport, eliminates Incast and can work well with shallow buffers in network switches. Polyraptor uses RQ codes and follows a receiver-driven approach for flow and congestion control, which is reminiscent of NDP. We have implemented a simulation model of Polyraptor and compared its performance to standard unicast data transport. The experimental results are very promising. I will then discuss our early work on machine-learned transport protocols using reinforcement learning. Finally, I will briefly present our study of a large number of traffic traces from academic, commercial and residential networks using state-of-the-art statistical techniques. We show that the log-normal distribution is a better fit than the Gaussian distribution commonly claimed in the literature. We examine anomalous traces which are a poor fit for all distributions tried and show that this is often due to traffic outages or links that hit maximum capacity. We demonstrate the utility of the log-normal distribution in two contexts: predicting the proportion of time traffic will exceed a given level (for service level agreement or link capacity estimation) and predicting 95th percentile pricing.
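As a brief sketch of the two uses of the log-normal fit mentioned at the end (assuming scipy, with synthetic per-interval traffic volumes standing in for real traces):

import numpy as np
from scipy import stats

samples = np.random.lognormal(mean=3.0, sigma=0.8, size=10_000)  # fake trace

shape, loc, scale = stats.lognorm.fit(samples, floc=0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)

level = 60.0
print("P(traffic > level):", fitted.sf(level))   # exceedance probability
print("95th percentile:", fitted.ppf(0.95))      # for percentile pricing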


Retrofitting a Concurrent Garbage Collector onto OCaml (31 October, 2018)

Speaker: KC Sivaramakrishnan

OCaml is a multi-paradigm programming language that advocates functional programming but does not prohibit the use of imperative features. Indeed, high-performance real-world code such as MirageOS unikernel libraries and trading systems at Jane Street exhibit careful use of imperative features wrapped under well-behaved functional interfaces. Such networked systems also tend to be latency-sensitive, where minimising stop-the-world pauses in the OCaml garbage collector is more beneficial than raw throughput. As OCaml gears up to get shared-memory parallelism support, it is crucial to preserve the desirable performance characteristics of vanilla OCaml. However, the pervasive use of imperative features and the addition of concurrency make this a difficult task. In this talk, I will present the overall design of the Multicore OCaml GC, and also dive into a few of the novel techniques that make it efficient.


Offence and Defence in State Sponsored Cyber Attacks on National Critical Infrastructures (24 October, 2018)

Speaker: Christopher Johnson

In this talk, I will provide a brief overview of recent cyber attacks on national critical infrastructures, focussing mainly on aviation and energy generation. I will describe how UK policy has changed in response to these threats and introduce two new projects working with GCHQ/NCSC to improve our defences. The first identifies metrics to assess the resilience of supply chains to cyber attacks in industries affected by the Network and Information Systems (NIS) directive. The second is opening the industrial cyber lab in the School to provide access to a range of government and commercial organisations, developing a pedagogy for forensics in networks and architectures that are very different from more conventional TCP/IP environments.


Meeting Society Challenges: Big Data Driven Approaches (05 October, 2018)

Speaker: Liangxiu Han

By 2020, the total size of digital data generated by social networks, sensors, biomedical imaging and simulation devices, will reach an estimated 44 Zettabytes (i.e. 44 trillion gigabytes) according to IDC reports. This type of 'big data', together with the advances in information and communication technologies such as Internet of things (IoT), connected smart objects, wearable technology, ubiquitous computing, is transforming every aspect of modern life and bringing great challenges and spectacular opportunities to fulfill our dream of a sustainable smart society.

This talk will focus on our recent research and development based on big data driven approaches to address society challenges through several real case studies in various application domains such as Health, Food, Smart Cities etc.


Algebraic Graphs (03 October, 2018)

Speaker: Andrey Mokhov

Abstract: Are you tired of fiddling with sets of vertices and edges when working with graphs? Would you like to have a simple algebraic data type for representing graphs and manipulating them using familiar functional programming abstractions? In this talk, we will learn a new way of thinking about graphs and a new approach to working with graphs in a functional programming language like Haskell. The ideas presented in the talk are implemented in the Alga library. I hope that after the talk you will be able to implement a new algebraic graph library in your favourite programming language in an hour.
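To give a flavour of the construction (a hypothetical C++ sketch, since Alga itself is a Haskell library): the entire API rests on four constructors, and graph semantics arise from interpreting the resulting expression tree.

    #include <memory>
    #include <set>
    #include <utility>

    // The four algebraic constructors: empty, vertex, overlay, connect.
    struct Graph {
        enum class Kind { Empty, Vertex, Overlay, Connect } kind = Kind::Empty;
        int v = 0;                        // vertex payload
        std::shared_ptr<Graph> lhs, rhs;  // operands of overlay/connect
    };
    using G = std::shared_ptr<Graph>;

    G empty() { return std::make_shared<Graph>(); }
    G vertex(int v) { auto g = std::make_shared<Graph>(); g->kind = Graph::Kind::Vertex; g->v = v; return g; }
    G overlay(G a, G b) { auto g = std::make_shared<Graph>(); g->kind = Graph::Kind::Overlay; g->lhs = std::move(a); g->rhs = std::move(b); return g; }
    G connect(G a, G b) { auto g = std::make_shared<Graph>(); g->kind = Graph::Kind::Connect; g->lhs = std::move(a); g->rhs = std::move(b); return g; }

    // One interpretation: flatten an expression into vertex and edge sets.
    // overlay unions both sides; connect also adds an edge from every
    // vertex on the left to every vertex on the right.
    std::pair<std::set<int>, std::set<std::pair<int, int>>> toSets(const G& g) {
        if (g->kind == Graph::Kind::Empty) return {};
        if (g->kind == Graph::Kind::Vertex) return {{g->v}, {}};
        auto [lv, le] = toSets(g->lhs);
        auto [rv, re] = toSets(g->rhs);
        std::set<std::pair<int, int>> es = le;
        es.insert(re.begin(), re.end());
        if (g->kind == Graph::Kind::Connect)
            for (int a : lv) for (int b : rv) es.insert({a, b});
        lv.insert(rv.begin(), rv.end());
        return {lv, es};
    }

For example, connect(vertex(1), overlay(vertex(2), vertex(3))) denotes the graph with edges 1→2 and 1→3; overlay alone never creates edges.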

 

Bio: Andrey Mokhov is a senior lecturer in computer engineering at Newcastle University and a Royal Society Industry Fellow. Andrey is interested in applying abstract mathematics and functional programming to industrial engineering problems. In 2015 he was a visiting researcher at Microsoft Research Cambridge, redesigning the build system of the Glasgow Haskell Compiler; this project is currently continued as part of a 4-year Royal Society research fellowship dedicated to studying build systems in general.

 


Experts need not apply (26 September, 2018)

Speaker: Pavlos Petoumenos

Stagnating single core performance is driving us towards more complex hardware configurations. Extracting all available performance from such systems is not a straightforward task. It involves trial and error and expert knowledge about how hardware and software interact. Development tools have failed to keep up with this challenge. Even for simple optimisation problems, they might fail to extract three quarters of the available performance. Unless we find radical new ways for analysing and optimising applications, the gap between possible and typically achievable performance will only widen. 

My research aims at bridging this gap with analysis and optimisation methodologies which are fast, easy to use, and require no supervision or expert guidance. My earlier work used state capturing and lightweight instrumentation to evaluate optimisation decisions without any input from the developer or the user. My current research goes even further, with deep language learning to test, analyse, and improve complex code. This line of work has the potential to permanently change how we create optimisation and analysis heuristics.


Efficient Cross-architecture Hardware Virtualisation (19 September, 2018)

Speaker: Tom Spink

Abstract:

Virtualisation is a powerful tool used for the isolation, partitioning,  and sharing of physical computing resources. Employed heavily in data centres, becoming increasingly popular in industrial settings, and used by home-users for running alternative operating systems, hardware virtualisation has seen a lot of attention from hardware and software developers over the last ten–fifteen years.

From the hardware side, this takes the form of so-called hardware-assisted virtualisation, and appears in technologies such as Intel-VT, AMD-V and the ARM Virtualization Extensions. However, most forms of hardware virtualisation are typically same-architecture virtualisation, where virtual versions of the host physical machine are created, providing very fast isolated instances of the physical machine, in which entire operating systems can be booted. But there is a distinct lack of hardware support for cross-architecture virtualisation, where the guest machine architecture is different to the host.

I will talk about my research in this area, and describe the cross-architecture virtualisation hypervisor Captive, which can boot unmodified guest operating systems, compiled for one architecture, in the virtual machine of another.

I will talk about the challenges of full system simulation (such as memory, instruction, and device emulation), our approaches to this, and how we can efficiently map guest behaviour to host behaviour.

Finally, I will discuss our plans for open-sourcing the hypervisor, the work we are currently doing and what future work we have planned.

 


Machine Learning in Compiler Optimisation (12 September, 2018)

Speaker: Zheng Wang

Developing an optimising compiler is a highly skilled and arduous process and there is inevitably a software delay whenever a new processor is designed. It often takes several generations of a compiler to start to effectively exploit the processors' potential, by which time a new processor appears and the process starts again. This never-ending game of catch-up means that we rarely fully exploit a shipped processor and it inevitably delays time to market. As we move to multi- and many-core platforms, this problem increases.

 

This talk will look at some of our award-winning studies on using machine learning to automate the design process of compiler optimisation heuristics. It will demonstrate how machine learning can be employed to reduce expert involvement in compiler design while yielding significantly better performance than hand-tuned heuristics.



Bio:

Zheng Wang is a Senior Lecturer at Lancaster University where he leads the distributed systems group. He develops methods and builds systems that allow computers to adapt to ever-changing environments. His research spans code optimisation, performance, energy efficiency, and systems security. His previous work has won three best paper awards and two best presentation awards at prestigious conferences in compilation and parallel computing.

 


TBC (29 May, 2018)

Speaker: Peter Inglis

tbc


TBC (22 May, 2018)

Speaker: Femi Olukoya

tbc


Virtualized Environment Memory Management for Future System Architectures (16 May, 2018)

Speaker: Paul V. Gratz

Hardware virtualization is a major component of large scale server and data center deployments due to its facilitation of server consolidation and scalability. Virtualization, however, comes at a high cost in terms of system main memory utilization. Current virtual machine (VM) memory management solutions impose a high performance penalty and are oblivious to the operating regime of the system. Therefore, there is a great need for low-impact VM memory management techniques which are aware of, and reactive to, current system state to drive down the overheads of virtualization. Further, as new memory technologies become available in the cloud, traditional systems and hypervisor software must be adapted to the changing systems architectures to achieve optimal performance and efficiency.

This talk will examine techniques to address these challenges in memory management for virtualized environments. First, we observe that the host machine operates under different memory pressure regimes, as the memory demand from guest VMs changes dynamically at runtime. Adapting to this runtime system state is critical to reduce the performance cost of VM memory management. We propose a novel dynamic memory management policy called Memory Pressure Aware (MPA) ballooning. MPA ballooning dynamically allocates memory resources to each VM based on the current memory pressure regime. Moreover, MPA ballooning proactively reacts and adapts to sudden changes in memory demand from guest VMs. Next, I will discuss our work characterizing the impact of current OS timeslice behavior on modern, shared, large last-level caches. Here we show that, counter to recent trends shortening timeslices, large last-level caches require lengthened timeslices to amortize their fill time.



BIO:

Paul V. Gratz is an Associate Professor in the department of Electrical and Computer Engineering at Texas A&M University, currently visiting the University of Edinburgh on sabbatical. His research interests include efficient and reliable design in the context of high performance computer architecture, processor memory systems and on-chip interconnection networks. He received his B.S. and M.S. degrees in Electrical Engineering from The University of Florida in 1994 and 1997 respectively. From 1997 to 2002 he was a design engineer with Intel Corporation. He received his Ph.D. degree in Electrical and Computer Engineering from the University of Texas at Austin in 2008. His papers "Path Confidence based Lookahead Prefetching" and "B-Fetch: Branch Prediction Directed Prefetching for Chip-Multiprocessors" were nominated for best papers at MICRO '16 and MICRO '14 respectively. At ASPLOS '09, Dr. Gratz received a best paper award for "An Evaluation of the TRIPS Computer System." In 2016 he received the "Distinguished Achievement Award in Teaching - College Level" from the Texas A&M Association of Former Students and in 2017 he received the "Excellence Award in Teaching, 2017" from the Texas A&M College of Engineering.

 


TBC (15 May, 2018)

Speaker: Dejice Jacob

tbc


Curried C++ Template Metaprogramming (09 May, 2018)

Speaker: Paul Keir

Rather than solely confirming aspects of correctness, types hold the potential to be the primary input in a program's construction. As a Turing complete sublanguage, C++ templates can run arbitrarily complex code at compile time. C++ templates are also a strict, functional language, though this aspect can be marginalised, owing to weak support for higher-order (meta)functions; and other omissions such as currying and type classes. In this talk we introduce a small library to allow idiomatic higher-order C++ metafunction classes to be implicitly curried, and demonstrate its application to a selection of interesting folds; with the assistance of the tacit/pointfree paradigm.
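To give a flavour of the technique (a hypothetical sketch, not the library presented in the talk): the essential trick is to dispatch on whether a metafunction has received its full complement of type arguments, and to return a partially-applied metafunction class otherwise.

    #include <cstddef>
    #include <type_traits>
    #include <utility>

    // Dispatch on saturation: declared here, defined by specialisation below.
    template <bool Saturated, template <typename...> class F,
              std::size_t Arity, typename... Ts>
    struct curry_dispatch;

    // An implicitly curried metafunction class wrapping F of arity Arity,
    // with the type arguments Bound... already applied.
    template <template <typename...> class F, std::size_t Arity, typename... Bound>
    struct curried {
        template <typename... Args>
        using apply = typename curry_dispatch<
            (sizeof...(Bound) + sizeof...(Args) >= Arity),
            F, Arity, Bound..., Args...>::type;
    };

    // Saturated: evaluate the underlying metafunction.
    template <template <typename...> class F, std::size_t Arity, typename... Ts>
    struct curry_dispatch<true, F, Arity, Ts...> { using type = F<Ts...>; };

    // Unsaturated: bind the arguments and wait for more.
    template <template <typename...> class F, std::size_t Arity, typename... Ts>
    struct curry_dispatch<false, F, Arity, Ts...> { using type = curried<F, Arity, Ts...>; };

    // Example: a binary metafunction, partially applied to one argument.
    template <typename A, typename B>
    struct pair_of { using type = std::pair<A, B>; };

    using bind_int = curried<pair_of, 2>::apply<int>;   // partial application
    using result   = bind_int::apply<double>;           // now saturated
    static_assert(std::is_same<result::type, std::pair<int, double>>::value, "");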

 


TBC (08 May, 2018)

Speaker: Ana Ibrahim

tbc


Novel certification challenges to airworthiness (02 May, 2018)

Speaker: Paul Caseley

New technologies pose evaluation challenges for airworthiness for future aircraft and their supporting systems. The UK Ministry of Defence has been investigating some of these new challenges through a 3 year research program which included academic and industry contributions. This presentation discusses some of the proposed and implemented solutions and ongoing challenges the research has highlighted for safety practitioners, developer policy holders and regulators. Topic areas for discussion include: Additive Manufacture (evidence based qualification /certification guidance), Multi Core processing environments (evaluating performance and provision of tools), Data Driven Safety (increased reliance on data in airworthiness, guidance strategies to evaluate), Pilot Substitution Functions (assisting operators through autonomous functions and reasoning of the system architectures including possible non deterministic functions), Mitigating Cyber – strategies to evaluate application of guidance / standards for threats and vulnerabilities affecting airworthiness.

 


TBC (01 May, 2018)

Speaker: Wing Li

tbc


On Uncoordinated Service Placement in Edge-Clouds, IEEE CloudCom 2017 (25 April, 2018)

Speaker: Ioannis Psaras

Abstract: Edge computing has emerged as a new paradigm to bring cloud applications closer to users for increased performance. ISPs have the opportunity to deploy private edge-clouds in their infrastructure to generate additional revenue by providing ultra-low latency applications to local users. We envision a rapid increase in the number of such applications for “edge" networks in the near future with virtual/augmented reality (VR/AR), networked gaming, wearable cognitive assistance, autonomous driving and IoT analytics having already been proposed for edge-clouds instead of the central clouds to improve performance. This raises new challenges as the complexity of the resource allocation problem for multiple services with latency deadlines (i.e., which service to place at which node of the edge-cloud in order to satisfy the latency constraints) becomes significant. In this talk, we will propose a set of practical, uncoordinated strategies for service placement in edge-clouds. Through extensive simulations using both synthetic and real-world trace data, we will demonstrate that uncoordinated strategies can perform comparatively well with the optimal placement solution, which satisfies the maximum amount of user requests.
 
Link to related paper: https://www.ee.ucl.ac.uk/~uceeips/files/cloudcom17-uncoordinated-service-placement.pdf
 
Bio: Ioannis Psaras is an EPSRC Early Career Fellow and Lecturer in Computer Networks at the Electrical and Electronic Engineering Department of UCL. His interests are in the areas of Internet routing and congestion control, Information-Centric Networks, edge- and fog-computing and mobile, opportunistic networks. Lately, he has been investigating the applications of Distributed Ledger Technology and blockchains to networking problems. He is generally interested in resource allocation and management for both existing and future Internet architectures.

Ioannis has received four Best Paper Awards for his contributions to high-quality conferences and workshops, all in the area of Information-Centric Networks and mobile communications. He serves on the committees of tens of top-quality conferences, workshops and think-tanks in the general area of networking. He has been actively involved in the ICN Research Group (ICNRG) of the Internet Research Task Force (IRTF) from its very first days. He co-authored one of the first Internet-Drafts produced by the group, on the “ICN Research Challenges”, which has now become RFC 7927.

He has been awarded and leads a number of EU- and UK-funded projects, with a total budget of more than €2M. All of his recent projects focus on Information-Centric Networking, mobile ad hoc communications, edge- and fog-computing and blockchains. More information can be found at: https://www.ee.ucl.ac.uk/~ipsaras/


TBC (24 April, 2018)

Speaker: Saad Nasser S Altamimi

tbc


SGXBounds: Memory Safety for Shielded Execution (18 April, 2018)

Speaker: Pramod Bhatotia

In this talk, I will present our work on how to efficiently ensure memory safety for shielded execution in the untrusted environment of the cloud.

Code: https://github.com/tudinfse/sgxbounds


TBC (17 April, 2018)

Speaker: Colin Perkins

tbc


TBC (10 April, 2018)

Speaker: Tim Storer

tbc


TBC (27 March, 2018)

Speaker: Ibrahim Alghamdi

tbc


Accelerating Deep Neural Networks on Low Power Heterogeneous Architectures (21 March, 2018)

Speaker: Jose Cano Reyes

Deep learning applications are able to recognise images and speech with great accuracy, and their use is now everywhere in our daily lives. However, developing deep learning architectures such as deep neural networks in embedded systems is a challenging task because of the demanding computational resources and power consumption. Hence, sophisticated algorithms and methods that exploit the hardware of the embedded systems need to be investigated. This work is our first step towards examining methods and optimisations for deep neural networks that can leverage the hardware architecture of low power embedded devices. In particular, we accelerate the inference time of the VGG-16 neural network on the ODROID-XU4 board. More specifically, a serial version of VGG-16 is parallelised for both the CPU and GPU present on the board using OpenMP and OpenCL. We also investigate several optimisation techniques that exploit the specific hardware architecture of the ODROID board and can accelerate the inference further. One of these optimisations uses the CLBlast library specifically tuned for the ARM Mali-T628 GPU present on the board. Overall, we improve the inference time of the initial serial version of the code by 2.8X using OpenMP, and by 9.4X using the most optimised version of OpenCL.
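As a rough illustration of the OpenMP part (a sketch with an illustrative loop nest and data layout, not the paper's code), a convolution layer parallelises naturally across its output feature maps:

    #include <vector>

    // Direct convolution over an H x W x C input with F output maps and
    // K x K kernels; the outer loops are shared across the CPU cores.
    void conv_layer(const std::vector<float>& in, int H, int W, int C,
                    const std::vector<float>& k, int K, int F,
                    std::vector<float>& out) {
        #pragma omp parallel for collapse(2) schedule(dynamic)
        for (int f = 0; f < F; ++f)              // each output feature map
            for (int y = 0; y <= H - K; ++y)
                for (int x = 0; x <= W - K; ++x) {
                    float acc = 0.0f;
                    for (int c = 0; c < C; ++c)  // accumulate over channels
                        for (int i = 0; i < K; ++i)
                            for (int j = 0; j < K; ++j)
                                acc += in[((y + i) * W + (x + j)) * C + c]
                                     * k[((f * C + c) * K + i) * K + j];
                    out[(y * (W - K + 1) + x) * F + f] = acc;
                }
    }

The OpenCL version follows the same decomposition, with each work-item computing one output element; the CLBlast optimisation instead maps the convolutions onto GEMM calls tuned for the Mali GPU.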


TBC (20 March, 2018)

Speaker: Abeer Ali

tbc


TBC (13 March, 2018)

Speaker: Mohammed Alhamed

tbc


Solving the Task Variant Allocation Problem in Distributed Robotics (06 March, 2018)

Speaker: Anna Lito Michala

We consider the problem of assigning software processes (or tasks) to hardware processors in distributed robotics environments. We introduce the notion of a task variant, which supports the adaptation of software to specific hardware configurations. Task variants facilitate the trade-off of functional quality versus the requisite capacity and type of target execution processors. We formalise the problem as a mathematical model that incorporates typical constraints found in robotics applications; the model is a constrained form of a multi-objective, multi-dimensional, multiple-choice knapsack problem. We propose and evaluate three different solution methods to the problem: constraint programming, a greedy heuristic and a local search metaheuristic. We demonstrate the use of task variants in a real interactive multi-agent navigation system, showing that constraint programming improves the system's quality of service, as compared to the local search metaheuristic, the greedy heuristic and a randomised solution, by an average of 16%, 31% and 56% respectively.
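In outline, the model has the flavour of the following multiple-choice assignment formulation (a simplified sketch; the actual model carries further robotics-specific constraints). With binary variables \( x_{tvp} \) selecting variant v of task t on processor p:

\[ \max \sum_{t}\sum_{v \in V_t}\sum_{p} q_{tv}\, x_{tvp} \quad\text{s.t.}\quad \sum_{v \in V_t}\sum_{p} x_{tvp} = 1 \;\;\forall t, \qquad \sum_{t}\sum_{v \in V_t} r_{tv}\, x_{tvp} \le C_p \;\;\forall p, \]

where \( q_{tv} \) is the functional quality of a variant, \( r_{tv} \) its resource demand, and \( C_p \) the capacity of processor p.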


TBC (27 February, 2018)

Speaker: Wim Vanderbauwhede

tbc


Safe and Efficient Data Representations for Dynamic Languages with Shared-Memory Parallelism (21 February, 2018)

Speaker: Stefan Marr

The performance of dynamic languages has improved ever since the days of Self. Even though they provide a plethora of features seemingly at odds with an efficient implementation, modern VMs execute programs written in JavaScript, Python, Ruby, or Smalltalk as fast as other, less dynamic languages. However, there remains one domain where dynamic languages haven't reached their more conservative counterparts: shared-memory parallelism.

So far, few dynamic language implementations have shed their global interpreter locks, which allow for simple and efficient data structures for objects and collections because data races don't have to be considered. The few implementations that did unfortunately expose applications to data races originating in the VM.

This talk presents work on safe data representations for objects and built-in collections that neither prevent parallel execution nor expose data races originating in VM-level implementation choices. We show that it is possible to avoid any overhead on single-threaded code as well as making the data structures scalable for parallel use cases.

 


Translating system models and paradigms (20 February, 2018)

Speaker: Tom Wallis

When we architect a new system, optimise an existing one, or investigate a system’s failure, we build models. Being able to represent and analyse what we build is essential. Unfortunately, different types of analysis require models of different kinds, and for very large systems the process of capturing and analysing that system model can be intractable.
One method to circumvent this is to translate between different sorts of system model. This way, we can capture information in a convenient format, but convert to something more amenable when performing analysis; we already do this to a small extent when the paradigms being converted between are relatively similar. However, vastly different kinds of model — representing different system properties selected from a gamut of behaviours, traits, and environmental properties such as models of time — are difficult to convert. Why is this? What is the current state of the art? What are the potential risks? And how might we go about performing these translations?


A Design-Driven Methodology for the Development of Large-Scale Orchestrating Applications (14 February, 2018)

Speaker: Milan Kabáč

Our environment is increasingly populated with large numbers of smart objects that monitor free parking spaces, analyse material conditions in buildings, detect unsafe pollution levels in cities, etc. These massive collections of sensing and actuation devices constitute large-scale infrastructures that span entire parking lots, campuses of buildings or agricultural fields. Despite the fact that large-scale sensor infrastructures have been successfully deployed in a number of domains, the development of applications for these infrastructures remains challenging. In particular, considerable knowledge about the hardware/network specificities of the sensor infrastructure is required on the part of the developer. To address this issue, software development methodologies and tools that raise the level of abstraction need to be introduced to allow non-expert developers to program applications.


Why aren't more users more happy with our VMs? (07 February, 2018)

Speaker: Laurence Tratt

Programming language Virtual Machines (VM)s are now widely used, from server applications to web browsers. Published benchmarks regularly show that VMs can optimise programs to the same degree as, and often substantially better than, traditional static compilers. 

Yet, there are still people who are unhappy with the VMs they use. Frequently their programs don't run anywhere near as fast as benchmarks suggest; sometimes their programs even run slower than more naive language implementations. Often our reaction is to tell such users that their programs are "wrong" and that they should fix them.

This talk takes a detailed look at VM performance, based on a lengthy experiment: we not only uncovered unexpected patterns of behaviour, but found that VMs perform poorly far more often than previously thought. I will draw on some of my own experiences to suggest how we may have gotten into such a pickle. Finally, I will offer some suggestions as to how we might be able to make more VM users more happy in the future.

 


YewPar: A Framework for Scalable Re-useable Parallel Tree Search (06 February, 2018)

Speaker: Blair Archibald

Tree searches are used to solve many problems, from exploring mathematical objects, to scheduling factories, to AI applications. While these problems in theory lend themselves well to parallelism, often this is performed on a per-application or per-scale basis. Given the importance of such problems we advocate a more general approach: leveraging parallel algorithmic skeletons for cross-platform, cross-domain implementations.

This talk looks at general methods for parallelising tree searches, focusing on YewPar, a new C++ framework for parallel tree search. We look at the design of a general-purpose tree search API, the implementation of YewPar, and the design of high-performance skeletons targeting large(-ish) clusters (and beyond!).
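To fix ideas, here is a minimal sequential sketch of the skeleton shape (hypothetical, not the YewPar API): the user supplies node expansion and visiting logic, while the skeleton owns the traversal, which is precisely the part a parallel implementation replaces.

    #include <stack>
    #include <utility>

    // Generic depth-first tree search: Children maps a node to its child
    // nodes (the branching rule); Visit inspects a node, e.g. to update an
    // incumbent bound in branch-and-bound.
    template <typename Node, typename Children, typename Visit>
    void tree_search(Node root, Children children, Visit visit) {
        std::stack<Node> pending;
        pending.push(std::move(root));
        while (!pending.empty()) {
            Node n = std::move(pending.top());
            pending.pop();
            visit(n);
            for (auto& c : children(n))   // expand and schedule subtrees
                pending.push(std::move(c));
        }
        // A parallel skeleton keeps the user-supplied code unchanged and
        // instead distributes subtrees across workers or cluster localities,
        // e.g. via work-stealing or static spawning.
    }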


How can applications benefit from NVRAM technology? Evaluation Methodology and Testing (31 January, 2018)

Speaker: Juan Herrera

The objective of NEXTGenIO, an EC H2020 project, is to design and implement a platform that can address the challenge of delivering scalable I/O performance to applications at the Exascale. NVRAM is used to reduce the latency gap between memory and storage. In order to evaluate the platform's effectiveness regarding I/O performance and throughput, a set of eight memory and I/O-bound applications have been selected. In this talk, the methodology for testing the NEXTGenIO platform will be presented. One of the topics to be discussed is how NVRAM will impact the end-to-end performance of these applications.


TBC (30 January, 2018)

Speaker: Niall Barr

tbc


Brain Computer Interface for Neurorehabilitation and Inclusive Gaming (24 January, 2018)

Speaker: Aleksandra Vuckovic

Two major clinical applications for Brain Computer Interfaces (BCIs) are assistive devices and neurorehabilitation. In my talk I will present the research activities of the Rehabilitation Engineering group in these areas. The first part of the talk will be dedicated to two clinical applications that I have been working on for a number of years: rehabilitation of hand function and treatment of chronic pain in people with spinal cord injury. I will present our Impact Case study, a continuation from REF 2014, the development of BCI software for portable home-based applications as a class 1 medical device (ISO 62304), and first steps towards creating a BCI service design. Following this, I will present some initial results and plans for the future of BCI inclusive serious games for rehabilitation and entertainment. I look forward to discussing areas of possible collaboration.


Sigma16: A computer architecture for teaching and research (23 January, 2018)

Speaker: John O'Donnell

Sigma16 is a computer design intended for teaching, at both elementary and advanced levels, and also as an experimental platform for research.  It is currently being used in four courses at Glasgow.  The rationale for using an artificial design rather than a commercial architecture will be discussed.  The architecture is designed to allow a very simple subset to be used for introductory courses, but also to support a more sophisticated presentation of the relationship between computer architecture, digital circuit design, programming languages and operating systems.

 


On the Effort to Build Efficient Deep Learning Systems (13 December, 2017)

Speaker: Partha Maji

The recent progress in deep learning technology helped to advance the state-of-the-art in many areas, including computer vision, speech recognition, and natural language processing. As this technology becomes more mature, there is a reason to believe that it will enable many emerging applications and improve human abilities. But, to realize its true potential, we also need to improve traditional machine architectures and platform hardware.
 
In this talk, we will discuss the challenges involved in implementing deep learning technology and study some of the emerging neural network workloads. We will start with a brief introduction to the field of deep networks. We will then look at how model-architecture co-design may help to design systems more efficiently than tackling models and hardware in isolation. As a case study, we will discuss some example architectures from the industry, as well as explore many new emerging approaches.


Quantum Computing and Computational Fluid Dynamics: potential applications and simulation on classical HPC systems (06 December, 2017)

Speaker: Rene Steijl

In recent years the field of quantum computing (QC) has grown into an active and diverse field of research and significant progress has been made with building quantum computers. For a small number of applications, quantum algorithms have been developed that would lead to a significant speed-up relative to classical methods when executed on a suitable quantum computer. Despite this research effort, progress in defining suitable applications for quantum computers has been relatively limited and two decades after their invention, Shor's algorithm for factoring composite integers and Grover's algorithm for quantum search are still among the main applications.
In the present work, we investigate the potential of quantum computing and suitably designed algorithms for future computational fluid dynamics (CFD) applications. In the absence of the required quantum hardware, large-scale parallel simulations on 'classical' parallel computers are required in developing such algorithms. The presentation will cover a number of quantum algorithms which can potentially be used effectively as part of larger CFD algorithms/methods. Errors introduced by quantum decoherence, gate errors as well as uncertainties introduced by quantum measurement operations all need to be accounted for in the analysis and design of new quantum algorithms for practical use. A parallel quantum computing simulator was developed as part of the present investigation. Challenges and results from simulations on HPC facilities form the second main aspect of this presentation. Finally, ideas and prospects for future developments will be presented.


Structural and Behavioural Types for SoC Design: Motivations and Challenges (05 December, 2017)

Speaker: Jan de Muijnck-Hughes

The Border Patrol Project seeks to investigate how state-of-the-art advances in programming language theory can provide better guarantees towards System-on-Chip (SoC) design and execution. Specifically, we are interested in extending existing work on structural type systems for SoC with behavioural information, looking to incorporate value-dependent multi-party session types using dependent types.

In this talk I will aim to: discuss the goals of the Border Patrol Project; outline some of the design challenges we have encountered so far when looking to adapt multi-party session types for describing hardware; and demonstrate how dependent types can help reason about the structure of SoC architectures.


Towards High-Performance Code Generation for Streaming Applications on FPGA Clusters (28 November, 2017)

Speaker: S Waqar Nabi

High-performance computing on heterogeneous platforms in general and those with FPGAs in particular presents a significant programming challenge. We contend that compiler technology has to evolve to automatically optimize applications by transforming a given original program. We are developing a novel methodology based on “TYpe TRAnsformations” (TyTra) of a given scientific kernel.

I am going to talk about the overall TyTra framework, with a focus on the memory access optimizations required to maintain “streaming” on the FPGAs, which is essential to get performance out of these devices. A “2d-shallow-water” scientific model will be used as an illustration and I will discuss some recent results. 

I will also discuss my experience of visiting the CHREC laboratory at the University of Florida this summer as part of a HiPEAC collaboration grant. I will talk a little about their FPGA cluster(s), and how working with them informs the development of the TyTra optimising compiler.


Design Patterns for Robustness in Community Network Infrastructure (22 November, 2017)

Speaker: William Waites

In rural and remote parts of Scotland, as elsewhere, grass-roots collaborative infrastructure projects are filling the gaps left by the incumbent carrier and mobile providers. This is typically done on a shoestring budget, with variously skilled labour, using inexpensive off the shelf equipment. This lightweight approach allows for rapid development but some care must be taken to allow such networks to operate at scale over a large geographical area. We survey several exemplar networks on the West Coast of Scotland. Taking inspiration from work on the synthesis of reliable systems from unreliable components, and the high-level structure of the Internet, we show how these small community networks can be connected in a way that, in aggregate, is robust against various failure modes, technical, organisational, and regulatory.


End-host Driven Troubleshooting Architecture for Software-Defined Networks (21 November, 2017)

Speaker: Levente Csikor

The high variability in traffic demands, the advanced networking services at various layers (e.g., load-balancers), and the steady penetration of SDN technology and virtualisation make the crucial network troubleshooting tasks ever more challenging in multi-tenant environments.
Service degradation is first noticed by the users, who are the only ones with visibility of much of the information (e.g., connection details) required for accurate and timely problem resolution; as a result, the infrastructure layer is often forced into continuous monitoring, leading to wasteful resource management, not to mention long resolution times.
In this talk, I will propose an End-host-Driven Troubleshooting (EDT) architecture, in which users can share application-specific connection details with the infrastructure to accelerate the identification of the root causes of performance degradation, avoiding the need for always-on, resource-intensive, network-wide monitoring.
Utilising EDT, I will show some essential tools for real end-to-end trace routing (PTR), identifying packet losses, and carrying out hop-by-hop latency measurements (HEL).


Energy Consumption per Operation in a Deep Learning Neural Network (15 November, 2017)

Speaker: Shufan Yang

A wide range of video/vision applications including robotics, advanced driver assistance and autonomous vehicles currently require high performance processing for object recognition. Many popular deep learning based object detection frameworks are quite impressive; however, these frameworks still require very high computation power. With the performance/power/area ratio limited in embedded systems, this poses an interesting problem. To build a quicker and more accurate video processing system, we have constructed a CPU/GPGPU/FPGA hybrid system that provides a flexible solution combining software and hardware programmability to investigate energy consumption. This talk will cover our latest cross-cutting software and hardware programmable approach to address the performance/power/area challenges posed by convolutional neural networks in complex machine vision applications.

 

 

Bio: Dr Shufan Yang received a Ph.D. degree in Computer Science from the University of Manchester in 2010, supervised by Professor Steve Furber. She is currently a lecturer in the School of Engineering at the University of Glasgow. Prior to joining UoG, she was a post-doc at one of Europe's largest intelligent robotics groups (ISRC) at the University of Ulster from 2010 to 2012. Her research interests include System-on-Chip, machine vision and the implementation of reconfigurable architectures. She has published over 50 journal and conference papers. She has worked on research projects including SpiNNaker, EU FP7 IM-CleveR and EU Si Elegans. Her research has been sponsored by Dstl, the EU and NCFS, as well as industry partners including Rolls-Royce, Xilinx, ARM, NVidia and TIC Clean Companies.

 


From Monoliths to Microservices (14 November, 2017)

Speaker: Mircea Iordache

Network security applications have struggled to protect modern networks: by design they are often non-parallelisable and monolithic, perform poorly in distributed environments, and impact overall performance for network users. Reimplementing security applications to conform to modern standards is unfeasible due to the scale of the undertaking, so I propose an alternative based on controlling behaviour to create flexible microservices that improve user-to-service latency and network infrastructure utilisation.


Caching the Internet (07 November, 2017)

Speaker: Marcel Flores

Content Delivery Networks (CDNs) are a core component in the delivery of Internet content to end users and employ a variety of caching policies to achieve fast and reliable delivery. While cache optimization has been a significant topic in the past, prior work has mostly focused on specific use-cases in single-tenant environments with distinct workloads.
 
In this talk, I’ll explore the efficacy of popular caching policies in a large-scale, global, multi-tenant CDN.  We examine the client behaviors observed in a network of over 100 high-capacity super-PoPs.  Using production data sets, we show that for such a large-scale and diverse use case, simpler caching policies dominate. 
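For reference, the canonical "simple" policy in this space is LRU; a minimal sketch (illustrative C++, not production CDN code) shows why it is attractive at scale: a recency-ordered list plus a hash index gives constant-time hits, inserts and evictions.

    #include <cstddef>
    #include <list>
    #include <optional>
    #include <string>
    #include <unordered_map>
    #include <utility>

    class LruCache {
        std::size_t capacity_;
        std::list<std::pair<std::string, std::string>> items_;  // front = most recent
        std::unordered_map<std::string, decltype(items_)::iterator> index_;
    public:
        explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

        std::optional<std::string> get(const std::string& key) {
            auto it = index_.find(key);
            if (it == index_.end()) return std::nullopt;        // miss
            items_.splice(items_.begin(), items_, it->second);  // refresh recency
            return it->second->second;
        }

        void put(const std::string& key, const std::string& value) {
            if (auto it = index_.find(key); it != index_.end()) {
                it->second->second = value;
                items_.splice(items_.begin(), items_, it->second);
                return;
            }
            if (items_.size() == capacity_) {                   // evict least recent
                index_.erase(items_.back().first);
                items_.pop_back();
            }
            items_.emplace_front(key, value);
            index_[key] = items_.begin();
        }
    };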


Towards a predictable cloud (01 November, 2017)

Speaker: Thomas Karagiannis

Abstract:

These are exciting times for technologies in the cloud. One of the key requirements for high-performance applications in today’s multi-tenant datacenters is performance predictability, a traditionally elusive property for shared resources like the network or storage. Yet, online services running in infrastructure datacenters need such predictability to satisfy application SLAs, and cloud datacenters require guaranteed performance to bound customer costs and spur adoption. In this talk, through the story of the Predictable Datacenters Project at Microsoft Research, which resulted in key QoS features in Windows Server, I will describe how simple abstractions and mechanisms can offer predictable performance for shared cloud resources like the network and even storage. Finally, I will briefly discuss projects that the group in Cambridge is focusing on.

 Bio: 

Thomas Karagiannis is a senior researcher with the Systems and Networking group of Microsoft Research Cambridge, UK. His research interests span most aspects of computer communications and networks, with his current focus being on data centers and the cloud. His past work spans Internet measurements and monitoring, network management, home networks and social networks. He holds a Ph.D. degree in Computer Science from the University of California, Riverside, and a B.S. from the Applied Informatics department of the University of Macedonia, in Thessaloniki, Greece. Thomas has published several papers in the premier venues for computer communications and networking and has served on several of the corresponding technical program committees.

 


Yesterday my Java profiler worked. Today it does not. Why? (31 October, 2017)

Speaker: Jeremy Singer

The Java virtual machine (JVM) and its hosted programming languages are evolving rapidly. Unfortunately there are two side effects. (1) Quantitative studies of characteristic behaviour are quickly outdated. (2) JVM profiling requires constant tool maintenance effort. This presentation explores how to make JVM profiling great again.


Software-Defined Datacenter Network Debugging (25 October, 2017)

Speaker: Myungjin Lee

Datacenter network debugging is complex. Existing network debuggers are even more complex, requiring in-network techniques like dynamic switch rule updates, collecting per-packet per-switch logs, collecting data plane snapshots, packet mirroring, packet sampling, traffic replay, etc.

In this talk, I will call for a radically different approach: in contrast to existing tools that implement the functionality entirely in-network (i.e., on network switches), we should carefully partition the debugging tasks between the edge and the network. To that end, I present PathDump, a minimalistic network debugger that achieves two main goals: (1) implement a large fraction of published network debugging functionalities using the network edge only; and (2) for functionalities that cannot entirely be implemented at the edge, use debugging at the edge to reduce the problem to a small subset of the network. In particular, I will discuss the design, implementation and evaluation of PathDump, which runs over a real network comprising only commodity network components.


Mapping an Anycast CDN Using RIPE Atlas (24 October, 2017)

Speaker: Stephen McQuistin

Anycast CDNs announce the same IP address blocks from different physical sites, or Points-of-Presence (PoPs). They then rely upon Internet routing to map clients to PoPs, creating catchments: the set of clients that map to a given PoP.  Optimisation of these catchments is important, as performance, scalability, and resilience are reduced by poor catchments (e.g., clients connecting to distant PoPs). Exploring new anycast configurations requires changes in anycast announcements, and understanding the impact of these changes is challenging. A large, diverse set of vantage points is required for coverage, but this makes it difficult to surface changes that are most significant to the CDN. In this talk, I’ll describe a methodology for mapping anycast catchments and evaluating changes in anycast configuration at a large CDN.


Device Architectures, Networks and Applications: A Semiconductor Perspective (18 October, 2017)

Speaker: Tim Summers

Semiconductor technology has remained supremely dominant over the last 30 years as the platform for enabling a massive global electronics industry.
 
This lecture will review some of the major semiconductor device and system architectural advances that have been necessary to accommodate the many new applications we have today. Device power, transistor geometry reduction, multi-core architectures and software considerations will be addressed. Finally, a brief view of some future industry applications and their possible impact on device architectures will be presented.


The Lift Project: Performance Portable Parallel Code Generation via Rewrite Rules (17 October, 2017)

Speaker: Michel Steuwer, Adam Harries, Naums Mogers, Federico Pizzuti, Toomas Remmelg, Larisa Stoltzfus

The Lift project aims at generating high-performance code for parallel processors from a portable high-level program. Starting from a single high-level program, an optimisation process based on rewrite rules transforms the portable program into highly specialised low-level code delivering high performance.

This talk will motivate the indispensability of performance portability given the increasing pace of the development of specialised hardware such as GPUs, FPGAs, or Google's TPU. After a brief introduction to the core aspects of Lift, the Lift team will give an overview of our ongoing research on using Lift to accelerate areas such as machine learning, physics simulations, graph algorithms, and linear algebra.


Improving Fuzzing with Deep Learning (10 October, 2017)

Speaker: Martin Sablotny

Today’s software products are complex entities with many functions, which have to be tested thoroughly in order to prevent security issues. In the modern software development process, fuzzing plays an important role in finding security-related bugs.
Nonetheless, developing generation-based fuzzers for complex input formats is time-consuming work and requires a lot of knowledge about those formats. This work focuses on the use of deep learning algorithms to create HTML tags, which are combined into test cases and executed inside a browser. First results have shown that it is possible to learn the format from a generation-based fuzzer and outperform it in terms of code coverage.


Winning the War in Memory (27 September, 2017)

Speaker: Prof Simon Moore

Memory safety bugs result in many vulnerabilities in our computer systems allowing exploits including recent security breaches: WannaCry, HeartBleed, CloudBleed and StackClash.  To fundamentally improve computer system resilience to these attacks, we propose a new processor (CHERI) together with compiler and operating system support that mitigate these bugs with few changes to applications.  CHERI provides fine grained memory protection using a new hardware supported type: the capability.  Capabilities provide hardware enforced provenance, integrity and bounds checking for code and data references.  We demonstrate how (code, data) capability pairs can be used for highly scalable and performant compartmentalisation.  Efficient compartmentalisation allows the principle of least privilege to be widely applied, mitigating both known and unknown attacks.  Though these changes to computer systems are radical, there is a clear adoption path and we are currently working with major commercial partners to transition the technology.
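As a software analogy for what CHERI enforces in hardware (purely illustrative: real capabilities are an ISA-level type with unforgeable tags, not a C++ class), a capability behaves like a pointer that carries its bounds and cannot be used outside them:

    #include <cstddef>
    #include <stdexcept>

    // Illustrative fat pointer: base and length record the authorised
    // region (provenance); every dereference is bounds-checked.
    template <typename T>
    class Capability {
        T* base_;
        std::size_t length_;
        std::size_t cursor_ = 0;
    public:
        Capability(T* base, std::size_t length) : base_(base), length_(length) {}

        Capability& operator+=(std::size_t n) {
            cursor_ += n;              // the offset moves; the bounds do not
            return *this;
        }
        T& operator*() const {
            if (cursor_ >= length_)    // in CHERI this check is in hardware
                throw std::out_of_range("capability bounds violated");
            return base_[cursor_];
        }
    };

A buffer overrun through such a reference faults deterministically instead of silently corrupting adjacent memory, which is the class of bug behind HeartBleed and friends.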


Towards Composable GPU Programming: Programming GPUs with Eager Actions and Lazy Views (26 September, 2017)

Speaker: Michel Steuwer

In this work, we advocate a composable approach to programming systems with Graphics Processing Units (GPU): programs are developed as compositions of generic, reusable patterns. Current GPU programming approaches either rely on low-level, monolithic code without patterns (CUDA and OpenCL), which achieves high performance at the cost of cumbersome and error-prone programming, or they improve the programmability by using pattern-based abstractions (e.g., Thrust) but pay a performance penalty due to inefficient implementations of pattern composition.

We develop an API for GPU programming based on C++ with STL-style patterns, together with its compiler-based implementation. Our API gives application developers the native C++ means (views and actions) to specify precisely which pattern compositions should be automatically fused during code generation into a single efficient GPU kernel, thereby ensuring high target performance. We implement our approach by extending the range-v3 library which is currently being developed for the forthcoming C++ standards. The composable programming in our approach is done exclusively in standard C++14, with STL algorithms used as patterns which we re-implemented in parallel for GPU. Our compiler implementation is based on the LLVM and Clang frameworks, and we use advanced multi-stage programming techniques for aggressive runtime optimizations.

We experimentally evaluate our approach using a set of benchmark applications and a real-world case study from the area of image processing. Our codes achieve performance competitive with CUDA monolithic implementations, and we outperform pattern-based codes written using Nvidia’s Thrust.
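The composition style itself, minus the GPU code generation, can be previewed with the standard C++ ranges that grew out of range-v3 (a CPU-only sketch; the contribution described above is fusing such pipelines into a single GPU kernel):

    #include <iostream>
    #include <numeric>
    #include <ranges>
    #include <vector>

    int main() {
        std::vector<int> xs(1024);
        std::iota(xs.begin(), xs.end(), 0);

        // Views are lazy: nothing runs until the pipeline is traversed,
        // so the transform and filter fuse into a single pass.
        auto pipeline = xs
            | std::views::transform([](int x) { return x * x; })
            | std::views::filter([](int x) { return x % 2 == 0; });

        long long sum = 0;
        for (int v : pipeline) sum += v;  // the one fused traversal
        std::cout << sum << '\n';
    }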


Designing Processors to Accelerate Robot Motion Planning (20 September, 2017)

Speaker: Prof. Daniel J. Sorin

We have developed a hardware accelerator for motion planning, a critical operation in robotics. I will present the microarchitecture of our accelerator and describe a prototype implementation on an FPGA. Experimental results show that, compared to the state of the art, the accelerator improves performance by three orders of magnitude and power consumption by more than one order of magnitude. These gains are achieved through careful hardware/software co-design. We have modified conventional motion planning algorithms to aggressively precompute collision data, and we have implemented a microarchitecture that leverages the parallelism present in the problem.

 


Alternative Explicit Congestion Notification Backoff for TCP: Or how one small change makes the Internet Better (17 May, 2017)

Speaker: Gorry Fairhurst

Abstract:

Active Queue Management (AQM) with Explicit Congestion Notification (ECN) has been deployed in cloud data centres to minimise latency and improve the near real-time deadlines of workflows such as Partition/Aggregate tasks. The talk explores how ECN can also reduce the latency of transactional applications using the Internet. This leads to a simple sender-side change to TCP, “Alternative Backoff with ECN”, and how this can offer a compelling reason to deploy and enable ECN across the Internet. It finally outlines the path to standardisation and how future research can enable new applications.
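The sender-side change itself is small enough to sketch (illustrative C++; ABE was subsequently specified in RFC 8511, which suggests the 0.8 factor used here):

    // On a congestion event, back off less aggressively for an ECN mark
    // (which signals early queue growth) than for loss (a full queue).
    double on_congestion_event(double cwnd, bool ecn_marked) {
        const double beta_loss = 0.5;  // classic multiplicative decrease
        const double beta_ecn  = 0.8;  // gentler ECN response (ABE)
        return cwnd * (ecn_marked ? beta_ecn : beta_loss);
    }

Because an ECN mark arrives before the queue overflows, the sender can afford a smaller window reduction, recovering throughput sooner while still easing queue pressure.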

Bio: 

Gorry Fairhurst is a Professor in the School of Engineering at the University of Aberdeen. His current research include performance evaluation and protocol design, Internet transport architecture, rural broadband access and satellite networking. He has 20 years experience working as an Internet Engineer, and is committed to open Internet standards and chairs the IETF’s Transport and Services Working Group (TSVWG).


OpenCL Just-In-Time Compilation for Dynamic Programming Languages (03 May, 2017)

Speaker: Michel Steuwer & Juan Fumero

Computer systems are increasingly featuring powerful parallel devices with the advent of many-core CPUs and GPUs. This offers the opportunity to solve computationally-intensive problems at a fraction of the time traditional CPUs need. However, exploiting heterogeneous hardware requires the use of low-level programming language approaches such as OpenCL, which is incredibly challenging, even for advanced programmers.

On the application side, interpreted dynamic languages are increasingly becoming popular in many domains due to their simplicity, expressiveness and flexibility. However, this creates a wide gap between the high-level abstractions offered to programmers and the low-level hardware-specific interface. Currently, programmers must rely on high-performance libraries or they are forced to write parts of their application in a low-level language like OpenCL. Ideally, non-expert programmers should be able to exploit heterogeneous hardware directly from their interpreted dynamic languages.

In this talk, we present a technique to transparently and automatically offload computations from interpreted dynamic languages to heterogeneous devices. Using just-in-time compilation, we automatically generate OpenCL code at runtime which is specialised to the actual observed data types using profiling information. We demonstrate our technique using R, a popular interpreted dynamic language predominantly used in big data analytics. Our experimental results show that execution on a GPU yields speedups of over 150x compared to the sequential R implementation, and the obtained performance is competitive with manually written GPU code. We also show that when taking start-up time into account, large speedups are achievable, even when applications run for as little as a few seconds.


Simulating Variance in Socio-Technical Behaviours using Executable Workflow Fuzzing (02 May, 2017)

Speaker: Tim Storer

Socio-technical systems model the structure and interactions of heterogeneous collections of actors, including human operators, technical artefacts and organisations. Such systems are characterised by the interactions of actors at different scales of activity and behave according to a complex interplay of factors, including formally defined business processes, legal or regulatory standards, technological evolution, organisational culture or norms, and interpersonal relationships and responsibilities. The modelling and engineering of such systems is still very much a craft, requiring repeated trial, error and subsequent revision. Application of conventional systems modelling methods is difficult, because socio-technical systems are not readily disposed to functional decomposition, as the complex interactions between components make a separation of concerns difficult. As a consequence, existing techniques result in models that either lack sufficient detail to capture the effect of subtle contingencies; are too narrow to make useful assessments about the larger system; are unable to capture evolution in behaviours; or are so complex that analysis and interpretation becomes intractable.

In this work, I will present a novel method for modelling socio-technical systems that substantially reduces the difficulty of simulating complex contingent behaviours. In our approach, informal, contingent behaviours are modelled as aspects that can be applied obliviously to alter actor behaviour described in idealised workflows. The aspects apply code fuzzers to the workflow descriptions, adjusting the flow of execution of a workflow and representing the variability that can occur in real-life systems. I will present a proof-of-concept tool, Fuzzi Moss, and evaluate the approach using a case study of software development workflows.


ePython: An implementation of Python for the micro-core Epiphany co-processor (26 April, 2017)

Speaker: Nick Brown

The Epiphany is a many-core, low-power co-processor with very little on-chip memory, typical of a number of innovative micro-core architectures. The very low power nature of these architectures means that there is potential for their use in future HPC machines, and their low cost makes them ideal for HPC education and prototyping. However, there is a high barrier to entry in programming them due to the associated complexities and the immaturity of supporting tools.

I will talk about ePython, a subset of Python for the Epiphany. Due to the idiosyncrasies of the hardware we have developed a new Python interpreter and this, combined with additional support for parallelism, means that novices can take advantage of Python to very quickly write parallel codes on the Epiphany and easily prototype their codes. In addition to running codes directly, we have developed support for decorating kernels in existing Python codes so that these can be seamlessly offloaded, via ePython, to the Epiphany. I will discuss a prototype machine learning code for detecting lung cancer in 3D CT scans, where our decorators are used to offload the neural network onto the Epiphany, in order to evaluate whether this technology is appropriate for these sorts of codes and what sort of performance one can expect.


Analytic Hierarchy Process Objective Function (25 April, 2017)

Speaker: Walaa Alayed

The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) relies on the use of external Objective Functions (OFs) for selecting the best path, where the majority of OFs are based on a single routing metric. In this talk I'll be presenting an Analytic Hierarchy Process Objective Function (AHP-OF) inspired by multi-criteria decision-making techniques. The idea of AHP-OF is to combine several routing metrics using the Analytic Hierarchy Process (AHP) technique to provide better neighbour selection compared to existing OFs. The motivation for designing AHP-OF is to satisfy the differing application requirements of Low Power and Lossy Networks (LLNs), such as reliable, real-time and highly available applications.
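The underlying AHP machinery is standard (a sketch; the talk's exact formulation may differ). Relative importances of the n routing metrics are elicited pairwise, and the weights fall out as the principal eigenvector:

\[ A = (a_{ij}) \in \mathbb{R}^{n \times n}, \quad a_{ji} = 1/a_{ij}, \qquad A w = \lambda_{\max} w, \quad \textstyle\sum_i w_i = 1, \]

after which a candidate parent with normalised metric values \( m_1, \dots, m_n \) is ranked by the weighted score \( \sum_{i=1}^{n} w_i m_i \); the consistency of the pairwise judgements is checked via how far \( \lambda_{\max} \) exceeds n.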


Walk this Way (19 April, 2017)

Speaker: Prof. Des Higham

Many applications require us to summarize key properties of a large, complex network. I will focus on the task of quantifying the relative importance, or "centrality", of the network nodes. This task is routinely performed, for example, on networks arising in biology, security, social science and telecommunication. To derive suitable algorithms, the concept of a walk around the network has proved useful, through either the dynamics of random walks or the combinatorics of deterministic walks.

In this talk I will argue that some types of walk are less relevant than others. In particular, eliminating backtracking walks leads to new network centrality measures with attractive properties and, perhaps surprisingly, reduced computational cost. Defining, analysing and implementing these new methods combines ideas from graph theory, matrix polynomial theory and sparse matrix computations.
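One standard formulation from this line of work (a sketch; the talk may differ in detail) uses the deformed graph Laplacian. With adjacency matrix A and degree matrix D, the matrices \( P_r \) counting non-backtracking walks of length r have the generating function

\[ \sum_{r \ge 0} P_r\, t^r \;=\; (1 - t^2)\,\bigl(I - tA + t^2 (D - I)\bigr)^{-1}, \]

so a Katz-style non-backtracking centrality vector x is obtained from a single sparse linear solve,

\[ \bigl(I - tA + t^2 (D - I)\bigr)\, x \;=\; (1 - t^2)\,\mathbf{1}, \]

which is why eliminating backtracking can actually reduce computational cost: no walk enumeration is required.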


Scalable Computing Beyond the Cloud (29 March, 2017)

Speaker: Blesson Varghese

It is forecast that over 50 billion devices will be added to the Internet by 2020. Consequently, 50 trillion gigabytes of data will be generated. Currently, applications generating data on user devices, such as smartphones, tablets and wearables use the cloud as a centralised server. This will soon become an untenable computing model. The way forward is to decentralise computations away from the cloud towards the edge of the network closer to the user. In my talk, I will present challenges, my current research and vision to harness computing capabilities at the edge of the network. More information is available at www.blessonv.com.


Generative Programming and Product Family Engineering with WizardsWorkbench (21 March, 2017)

Speaker: Niall Barr

Language Workbenches are tools used to support the creation and use of Domain Specific Languages (DSLs), frequently for the purpose of supporting Language Oriented Programming (LOP) or Generative Programming. LOP is an approach to application development where a language that is close to the problem domain is created, and the application is developed in this language. Generative programming is the related approach where a language at a high level of abstraction is used, and source code in a more general purpose language is generated from that code. In this talk I will describe the approach to web application development using generative programming that I have been using and evolving over several years, and my simple language workbench, WizardsWorkbench. As these web applications tend to follow a fairly similar pattern, with DSLs reused and evolved as required, my approach can be considered a form of Product Family Engineering that utilises generative programming. I will also describe the example-driven approach used with WizardsWorkbench to develop both the parsers and the code generation output templates, as well as the two DSLs used internally by WizardsWorkbench for parsers and templates.


Programmable Address Spaces (15 March, 2017)

Speaker: Paul Keir

In the last decade, high-performance computing has made increasing use of heterogeneous many-core parallelism. Typically the individual processor cores within such a system are radically simpler than their predecessors, and an increased portion of the challenge of executing relevant programs efficiently is reassigned: tasks previously the responsibility of hardware are now delegated to software. Fast on-chip memory is primarily exposed, within a series of trivially distinct programming languages, through a handful of address space annotations which associate discrete sections of memory with pointers, or similar low-level abstractions; traditional CPUs would provide a hardware data cache for such functionality. Our work aims to improve the programmability of address spaces by exposing new functionality within the existing template metaprogramming system of C++.


GPU Concurrency: The Wild West of Programming (08 March, 2017)

Speaker: Tyler Sorensen

GPUs are co-processors originally designed to accelerate graphics computations. However, their high bandwidth and low energy consumption have led to general purpose applications running on GPUs. To remain relevant in the fast-changing landscape of GPU frameworks, GPU programming models are often vague or underspecified. Because of this, several programming constructs have been developed which violate the official programming models, yet execute successfully on a specific GPU chip, enabling more diverse applications to be written for that specific device. During my PhD, we have examined one such construct: a global synchronisation barrier (or GSB). In this talk, we will address three key questions around this rogue programming construct: (1) Is it *possible* to write a portable GSB that successfully executes on a wide range of today's GPUs? (2) Can a GSB be *useful* for accelerating applications on GPUs? And (3) can a programming model that allows a GSB be *sustainable* for future GPU frameworks? Our hope is that this investigation will help the GSB find a permanent home in GPU programming models, enabling developers to write exciting new applications in a safe and portable way.

Short Bio: Tyler's research interests are in developing and understanding models for testing and safely developing GPU applications which contain irregular computations. In particular, he examines issues related to the GPU relaxed memory model and execution model. He received his MSc from the University of Utah in 2014 and worked as an intern on the Nvidia compiler team during the summers of 2013 and 2014.


A Framework for Virtualized Security (07 March, 2017)

Speaker: Abeer Ali

Traditional network security systems consist of deploying high-performance and high-cost appliances (middleboxes) in fixed locations of the physical infrastructure to process traffic, to prevent, detect or mitigate attacks. This limits their provisioning abilities to a static specification, hindering extensible functionality and resulting in vendor lock-in. Virtualizing security functions avoids these problems and increases the efficiency of the system. In this talk, we present the requirements and challenges of building a framework to deploy and manage virtualized security functions in a multitenant virtualized infrastructure such as the Cloud, and how we can exploit the latest advances in Network Function Virtualization (NFV) and the network services offered by Software-Defined Networking (SDN) to implement it.


Type-Driven Development of Communicating Systems using Idris (01 March, 2017)

Speaker: Dr. Jan de Muijnck-Hughes

Communicating protocols are a cornerstone of modern system design. However, there is a disconnect between the different tooling used to design, implement and reason about these protocols and their implementations. Session Types are a typing discipline that help resolve this difference by allowing protocol specifications to be used during type-checking to ensure that implementations adhere to a given specification.

Idris is a general purpose programming language that supports full dependent types, providing programmers with the ability to reason more precisely about programs. This talk introduces =Sessions=, our implementation of Session Types in Idris, and demonstrates the ability of =Sessions= to design and realise several common protocols.

=Sessions= improves upon existing Session Type implementations by introducing value dependencies between messages and fine-grained channel management during protocol design and implementation. We also use Idris' support for EDSL construction to allow protocols to be designed and reasoned about in the same language as their implementation, thereby introducing an intrinsic bond between a protocol's implementation, its specification, and its verification.

Using =Sessions=, we can reduce the existing disconnect between the tooling used for protocol design, implementation, and verification.


Next Generation Cyber-physical systems (22 February, 2017)

Speaker: Dr Steven J Johnston

Cyber-physical systems (CPS) have peaked in the hype curve and have demonstrated they are here to stay in one form or another. Many cities have attempted to retrofit 'smart' capabilities and there is no shortage of disconnected, often proprietary CPS addressing city infrastructure.

In the same way that online activity evolved from simplistic webpages to feature-rich Web 2.0, CPS also need to evolve. What will the Smart City 2.0 of tomorrow look like, how will the architectures evolve, and, most importantly, how does this address the key challenges of cities: energy, environment and citizens? (Audience interaction welcomed.)


Get Your Feet Wet With SDN in a HARMLE$$ Way (21 February, 2017)

Speaker: Levente Csikor

Software-Defined Networking (SDN) offers a new way to operate, manage, and deploy communication networks and to overcome many of the long-standing problems of legacy networking. However, widespread SDN adoption has not occurred yet, due to the lack of a viable incremental deployment path and the relatively immature present state of SDN-capable devices on the market. While continuously evolving software switches may alleviate the operational issues of commercial hardware-based SDN offerings (lagging standards compliance, performance regressions, and poor scaling), they fail to match their cost-efficiency and port density. In this talk, we propose HARMLESS, a new SDN switch design that seamlessly adds SDN capability to legacy network gear by emulating the OpenFlow switch OS in a separate software switch component. This way, HARMLESS enables a quick and easy leap into SDN, combining the rapid innovation and upgrade cycles of software switches with the port density and cost-efficiency of hardware-based appliances into a fully dataplane-transparent and vendor-neutral solution. HARMLESS incurs an order of magnitude smaller initial expenditure for an SDN deployment than existing turnkey vendor SDN solutions while, at the same time, yielding matching, or even better, data plane performance.


Network-layer QoE-Fairness for Encrypted Adaptive Video Streams (15 February, 2017)

Speaker: Dr Marwan Fayed

Netflix, YouTube, and iPlayer are increasingly targets of the following complaint: "How come my child gets HD streams on her phone, while I'm stuck with terrible quality on my 50 inch TV?" Recent studies observe that competing adaptive video streams generate flows that lead to instability, under-utilization, and unfairness behind bottleneck links. Additional measurements suggest there may also be a negative impact on users' perceived quality of experience as a consequence. Intuitively, application-generated issues should be resolved at the application layer. In this presentation I shall demonstrate that fairness, by any definition, can only be solved in the network, and moreover that, in an increasingly HTTPS world, some form of client interaction is required. In support, a new network-layer 'QoE-fairness' metric will be introduced that reflects user experience. Experiments using our open-source implementation in the home environment reinforce the network-layer as the right place to attack the general problem.

Refs

[1] http://dl.ifip.org/db/conf/networking/networking2015/1570066341.pdf

[2] https://dl.acm.org/citation.cfm?id=2940144
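
The QoE-fairness metric itself is not defined in this abstract. For the general flavour of such metrics, here is Jain's classic fairness index applied to illustrative per-stream quality scores; it is not the metric introduced in the talk:

    # Jain's fairness index: 1.0 when all streams score equally,
    # approaching 1/n as one stream dominates.
    def jain_index(scores):
        n = len(scores)
        return sum(scores) ** 2 / (n * sum(s * s for s in scores))

    # Two competing streams with very different perceived quality (0-100):
    print(jain_index([95, 35]))   # ~0.82, far from the fair value of 1.0
    print(jain_index([65, 65]))   # 1.0, perfectly fair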

Bio: Marwan Fayed received his MA from Boston University and his PhD from the University of Ottawa, in 2003 and 2009 respectively, and in between worked at Microsoft as a member of the Core Reliability Group. He joined the faculty at the University of Stirling, UK in 2009 under the Scottish Informatics and Computer Science Alliance (SICSA) scheme. He recently held the appointment of 'Theme Leader' for networking research in Scotland. His current research interests lie in wireless algorithms, as well as general network, transport, and measurement issues in next generation edge networks. He is a co-founder of HUBS c.i.c., an ISP focussed on rural communities; a recipient of an IEEE CCECE best paper award; and serves on committees at IEEE and ACM conferences.


The Last of the Big Ones: Crazy Stone, AlphaGo, and Master (14 February, 2017)

Speaker: John O'Donnell

The computer program AlphaGo made history in 2016 by defeating Lee Sedol, one of the top professional go players, in a five game match.  A few weeks ago, an updated version of AlphaGo played 60 games against professionals and won them all.  The current generation of strong go programs use neural networks and Monte Carlo tree search.  These programs have a distinctive playing style and occasionally make astonishing moves, raising questions that are presently the focus of intensive research.  This talk will explore some of these issues, and illustrate them with incidents from the history of go as well as from the recent games by computers.


Inference-Based Automated Probabilistic Programming in Distributed Embedded Node Networks (08 February, 2017)

Speaker: Dr. Mark Post

Driven by ever more demanding applications, modern embedded computing and automation systems have reached unprecedented levels of complexity. Dr. Post's research focuses on applying novel software and hardware architectures to simplify and distribute the structure of robots and other embedded systems, to make them robust and able to operate under uncertainty, and also to allow for more efficient and automated development processes. One way to achieve this is via the unification of programming and data, made possible by using probabilistic abstractions of exact data. In a new methodology for embedded programming developed through this research, exact variables are replaced with random variables and a computation process is defined based on evidence theory and probabilistic inference. This has many advantages, including the implicit handling of uncertainty, a guarantee of deterministic program execution, and the ability to apply both statistical on-line learning and expert knowledge from relational semantic sources. Implementation on real-time systems is made reliable and practical by applying modular and lock-free inter-process communication, semantic introspection and stochastic characterization of processes to build robust embedded networks based on wide-computing concepts. This methodology has a vast array of potential real-world applications, and some aspects have been applied successfully to the embedded programming of planetary rovers and agricultural robots.


The Problem of Validation in Systems Engineering (07 February, 2017)

Speaker: Robbie Simpson

Systems Engineering makes extensive use of modelling and analysis methodologies to design and analyse systems. However, it is rare for these methodologies to be effectively validated for correctness or utility. Additionally, the common use of case studies as an implicit validation mechanism is undermined by the lack of validation of these case studies themselves. This talk explores the problem of validation with specific reference to requirements engineering and safety analysis techniques, identifies the main shortcomings and attempts to propose some potential solutions.


intra-systems: TBA (07 February, 2017)

Speaker: Robbie Simpson


Research On Network Intrusion Detection Systems and Beyond (06 February, 2017)

Speaker: Dr Kostas Kyriakopoulos

The talk will give an overview of research conducted in the "Signal Processing and Networks" group at Loughborough University, with a strong emphasis on the "Networks" side. We have developed algorithms for fusing cross-layer measurements using the Dempster-Shafer evidence framework to decide whether packets/frames in the network are coming from a malicious source or from the legitimate Access Point. We are currently researching how to infuse this system with contextual information besides the direct measurements from the network. The talk will also discuss other networking topics, including ontologies for the management of networks, and give a brief introduction to the group's signal processing expertise in defence areas.


Pycket: A Tracing JIT for a functional language (01 February, 2017)

Speaker: Sam Tobin-Hochstadt

Functional languages have traditionally had sophisticated ahead-of-time compilers such as GHC for Haskell, MLton for ML, and Gambit for Scheme. But other modern languages, such as Java, Smalltalk, Lua, and JavaScript, often use JIT compilers. Can we apply JIT compilers, in particular the technology of so-called tracing JIT compilers, to functional languages? I will present a new implementation of Racket, called Pycket, which shows that this is both possible and effective. Pycket is very fast on a wide range of benchmarks, supports most of Racket, and even addresses the overhead of the proxies generated by gradual typing.

Biography: Sam Tobin-Hochstadt is an Assistant Professor in the School of Informatics and Computing at Indiana University. He has worked on dynamic languages, type systems, module systems, and metaprogramming, including creating the Typed Racket system and popularizing the phrase "scripts to programs." He is a member of the ECMA TC39 working group responsible for standardizing JavaScript, where he co-designed the module system for ES6, the next version of JavaScript. He received his PhD in 2010 from Northeastern University under Matthias Felleisen.


Intra-Systems Seminar (24 January, 2017)

Speaker: Jeremy Singer

Jeremy presents an analysis of beginner Haskell code.


Exploiting Memory-Level Parallelism (18 January, 2017)

Speaker: Dr Timothy M Jones

Many modern data processing and HPC workloads are heavily memory-latency bound. Current architectures and compilers perform poorly on these applications due to the highly irregular nature of the memory access patterns involved, leading to the CPU stalling for the majority of the time. On closer inspection, however, these applications contain abundant memory-level parallelism that is currently unexploited. Data accesses are, in many cases, well defined and predictable in advance, falling into a small set of simple patterns. To exploit this parallelism, though, we require new methods for prefetching, in hardware and software.

In this talk I will describe some of the work my group has been doing in this area over the past couple of years. First, I'll show a compiler pass to automatically generate software prefetches for indirect memory accesses, a special class of irregular memory accesses often seen in high-performance workloads. Next, I'll describe a dedicated hardware prefetcher that optimises breadth-first traversals of large graphs. Finally, I'll present a generic programmable prefetcher that embeds an array of small microcontroller-sized cores next to the L1 cache in a high-performance processor. Using an event-based programming model, programmers are able to realise performance increases of over 4x by manual creation of prefetch code, or 3.5x for the same application using an automatic compiler pass.


SpaceTime - A fresh view on Parallel Programming (14 December, 2016)

Speaker: Prof Sven-Bodo Scholz

Traditionally, programs are specified in terms of data structures and successive modifications of these. This separation dictates at what time which piece of data is located in what space, be it main memory, disc or registers. When aiming at high-performance, parallel execution of programs, it turns out that the choice of this time/space separation can have a vast impact on the performance that can be achieved. Consequently, a lot of work has been spent on compiler technology for identifying dependencies between data and on techniques for rearranging code for improved locality with respect to both time and space. As it turns out, the programmer's choice of data structures often limits what can be achieved by such optimisation techniques. In this talk, we argue that a new way of formulating parallel programs, based on a unified view of space and time, not only matches typical scientific specifications much better, it also increases the re-usability of programs and, most importantly, enables more radical space-time optimisations through compilers.


Reviewing the Systems Curriculum Review (13 December, 2016)

Speaker: Colin Perkins

Over the last few months, the Section has been engaged in a review of our undergraduate curriculum and teaching. This talk will outline the changes we're proposing, and what we hope to achieve by doing so.


Knights Landing, MCDRAM, and NVRAM: The changing face of HPC technology (07 December, 2016)

Speaker: Mr Adrian Jackson

The hardware used in HPC systems is becoming much more diverse than we have been used to in recent times. Intel's latest Xeon Phi processor, the Knights Landing (KNL), is one example of such change, however bigger changes in memory technologies and hierarchies are on the way. In this talk I will outline our experiences with the KNL, how future memory technologies are likely to impact the hardware in HPC systems, and what these changes might mean for users.


Performance Evaluation for CloudSim - Cloud Computing Simulator (06 December, 2016)

Speaker: Dhahi Alshammari

Much cloud computing research is performed using simulators, and many simulators are available. One of the most common is "CloudSim", which is widely used as a cloud research tool. This talk will briefly review the CloudSim system and its various extensions, which provide additional usability features and improved simulation fidelity. I will further present the results of an empirical study evaluating the precision of CloudSim by comparing it with actual test-bed results from the Glasgow Raspberry Pi Cloud infrastructure.


Erlyberly - Erlang tracing for the masses (30 November, 2016)

Speaker: Mr Andy Till

The BEAM virtual machine has flexible and powerful tooling for introspection, statistics and debugging without affecting the running application. Erlyberly is an ongoing project to lower the barrier to entry for using these capabilities, focusing on tracing.


Raspberry Pi based sensor platform for a smart campus (29 November, 2016)

Speaker: Dejice Jacob

In a sensor network, using sensor nodes with significant compute capability can enable flexible data collection, processing and reaction. This can be done using commodity single-board computers. In this talk, we will describe the initial deployment, the software architecture and some preliminary analysis.


Data Structures as Closures (23 November, 2016)

Speaker: Prof Greg Michaelson

In formalising denotational semantics, Strachey introduced a higher order update function for the modelling of stores, states and environments. This function relies solely on atomic equality types, lambda abstractions and conditionals to represent stack-disciplined association sequences as structured closures, without recourse to data structure constructs like lists.

Here, we present higher order functions that structure closures to model queue, linear ordered and tree disciplined look-up functions, again built from moderately sugared pure lambda functions. We also discuss their type properties and practical implementation.
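
A minimal sketch in Python of the Strachey-style construction: a store is just a lookup function, and update wraps it in a new closure, using only equality, lambda and a conditional (no lists or dictionaries). The function names are illustrative:

    # A store maps names to values; the empty store knows no names.
    def empty(name):
        raise KeyError(name)

    def update(store, name, value):
        # Returns a new lookup closure; no data structure is built.
        return lambda n: value if n == name else store(n)

    s = update(update(empty, "x", 1), "y", 2)
    print(s("x"), s("y"))   # 1 2  (stack-disciplined: newest binding wins)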


intra-systems: TBA (22 November, 2016)

Speaker: John O'Donnell


Automatic detection of parallel code: dependencies and beyond (16 November, 2016)

Speaker: Mr Stan Manilov

Automatic parallelisation is an old research topic, but unfortunately it has always been over-promising and under-performing. In this talk, we'll look at the main approaches towards automatically detecting parallelism in legacy sequential code, and we'll follow with some fresh ideas we're working on, aiming to move beyond the ubiquitous dependence analysis.


Device Comfort for Information Accessibility (15 November, 2016)

Speaker: Tosan Atele-Williams

Device Comfort is an augmented notion of trust that embodies a relationship between a device, its owner and the environment, with the device able to act, advise, encourage, and reason about everyday interactions, including a minutely precise comprehension of information management and the personal security of the device owner. Device Comfort addresses the growing privacy and security needs of an increasingly intuitive, interactive and interconnected society as an information security methodology based on trust reasoning. In this talk, an information accessibility architecture based on the Java security sandbox and the Device Comfort methodology is presented, with a further look at how information can be classified based on trust ratings and sensitivity, and how everything within this definition is confined to trusted zones or dimensions.


Dynamically Estimating Mean Task Runtimes (08 November, 2016)

Speaker: Patrick Maier

The AJITPar project aims to automatically tune skeleton-based parallel programs such that the task granularity falls within a range that promises decent performance: tasks should run long enough to amortise scheduling overheads, but not too long.

In this talk, I will sketch how AJITPar uses dynamic cost models to accurately estimate mean task runtimes, despite irregular task sizes. The key is random scheduling and robust linear regression.

(Joint work with Magnus Morton and Phil Trinder.)
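
A toy illustration of the robust-regression idea, using a Theil-Sen estimator (median of pairwise slopes) on made-up task sizes and runtimes; this stands in for, and is not, AJITPar's actual cost model:

    import itertools, statistics

    sizes    = [10, 20, 30, 40, 50, 60]
    runtimes = [12, 21, 33, 39, 250, 61]   # one irregular outlier at size 50

    # Theil-Sen: the slope is the median of all pairwise slopes, so a few
    # irregular tasks barely perturb the fitted cost model.
    slopes = [(runtimes[j] - runtimes[i]) / (sizes[j] - sizes[i])
              for i, j in itertools.combinations(range(len(sizes)), 2)]
    slope = statistics.median(slopes)
    intercept = statistics.median(r - slope * s for s, r in zip(sizes, runtimes))

    print(f"estimated mean runtime for size 35: {slope * 35 + intercept:.1f}")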


Image processing on FPGAs with a DSL and dataflow transformations (02 November, 2016)

Speaker: Dr Rob Stewart

FPGAs are chips that can be reconfigured to exactly match the structure of a specific algorithm. They are faster than CPUs and need less power than GPUs, and hence are well suited for remote image processing needs. They are, however, notoriously difficult to program, which is often done by hardware experts working at a very low level. This excludes algorithm designers across a range of real world domains from exploiting FPGA technology. Moreover, time and space optimisation opportunities found in compilers of high level languages cannot be applied to low level hardware descriptions.

This talk will be in three parts. 1) I will present RIPL, our image processing FPGA DSL. It comprises algorithmic skeletons influenced by stream combinator languages, meaning the RIPL compiler is able to generate space efficient hardware. 2) I will demonstrate our compiler-based dataflow transformations framework, which optimises the dataflow IR form of RIPL programs before they are synthesised to FPGAs. 3) I will describe the FPGA-based smart camera architecture that RIPL programs slot into, which is used for evaluation.


Scaling robots and other stuff with Erlang (01 November, 2016)

Speaker: Natalia Chechina

I'm going to give this talk at the end of November at the BuildStuff developer conferences in Vilnius (Lithuania) and Kiev (Ukraine), so it's skewed towards the developer community rather than the research community. Any feedback will be very much appreciated.

I'll talk about the scalability and fault tolerance features of distributed Erlang: in particular, what makes it so good for large-scale distributed applications on commodity hardware, where devices are inherently unreliable and can disappear and re-appear at any moment.

The talk is based on the experience of developing Scalable Distributed Erlang (SD Erlang, a small extension of distributed Erlang for distributed scalability) and integrating Erlang in robotics. I'll share the rationale behind the design decisions for SD Erlang, lessons learned, advantages, limitations, and plans for further development, and then talk about the benefits of Erlang in distributed robotics, initial findings, and plans.


The Missing Link! A new Skeleton for Evolutionary Multi-Agent Systems in Erlang (26 October, 2016)

Speaker: Prof Kevin Hammond

Evolutionary multi-agent systems (EMAS) play a critical role in many artificial intelligence applications that are in use today. This talk will describe a new parallel pattern for parallel EMAS computations, and its associated skeleton implementation, written in Erlang using the Skel library. The skeleton enables us to flexibly capture a wide variety of concrete evolutionary computations that can exploit the same underlying parallel implementation. The use of the skeleton is shown on two different evolutionary computing applications: i) computing the minimum of the Rastrigin function; and ii) solving an urban traffic optimization problem. We can obtain very good speedups (up to 142.44× the sequential performance) on a variety of different parallel hardware from Raspberry Pis to large-scale multicores and Xeon Phi accelerators, while requiring very little parallelisation effort.
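
For reference, the Rastrigin function from the first application is a standard multimodal benchmark with global minimum 0 at the origin. A minimal sketch, with a toy random search standing in for the EMAS skeleton itself:

    import math, random

    def rastrigin(xs, a=10.0):
        return a * len(xs) + sum(x * x - a * math.cos(2 * math.pi * x)
                                 for x in xs)

    # Toy random-search "agent" over the usual [-5.12, 5.12] domain;
    # the real EMAS skeleton evolves populations of agents in parallel.
    best = min(rastrigin([random.uniform(-5.12, 5.12) for _ in range(2)])
               for _ in range(10000))
    print(f"best value found: {best:.3f}")   # approaches 0 with more samples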


Power, Precision and EPCC (25 October, 2016)

Speaker: Blair Archibald

I have recently returned from a summer working at EPCC, one of the largest high performance computing (HPC) centres in the UK. In this talk I'll give a whirlwind tour of what I got up to during my time there!

I'll start by describing EPCC itself and how it fits into the wider HPC community, then dive into two of the projects I was involved in over the summer.

First, the Adept project, which tackles the challenges presented by the need for energy-efficient computing. This project relies heavily on custom hardware to gain fine-grained knowledge of power usage. We will see how energy scales with parallel efficiency, the potential hidden cost of programming languages, and some interesting future research directions.

Next, the ExaFLOW project, aimed at providing the next generation of computational fluid dynamics codes (ready for the "Exa-scale" era). We will dive into mixed-precision analysis and discover how we can analyse the floating-point behaviour of scientific codes by way of binary instrumentation.


From Robotic Ecologies to Internet of Robotic Things: Artificial Intelligence and Software Engineering Issues (19 October, 2016)

Speaker: Dr Mauro Dragone

Building smart spaces combining IoT technology and robotic capabilities is an important and extended challenge for EU R&D&I, and a key enabler for a range of advanced applications, such as home automation, manufacturing, and ambient assisted living (AAL). In my talk I will provide an overview of robotic ecologies, i.e. systems made up of sensors, actuators and (mobile) robots that cooperate to accomplish complex tasks. I will discuss the Robotic Ecology vision and highlight how it shares many similarities with the Internet of Things (IoT): the ideal aim on both fronts is that arbitrary combinations of devices should be able to be deployed in everyday environments, and there efficiently provide useful services. However, while this has the potential to deliver a range of disruptive services and address some of the limitations of current IoT efforts, its effective realization necessitates both novel software engineering solutions and artificial intelligence methods to simplify large scale application in real world settings. I will illustrate these issues by focusing on the results of the EU project RUBICON (fp7rubicon.eu). RUBICON built robotic ecologies that can learn to adapt to changing and evolving requirements with minimum supervision. The RUBICON approach builds upon a unique combination of methods from cognitive robotics, machine learning, wireless sensor networks and software engineering. I will summarise the lessons learned by adopting such an approach and outline promising directions for future developments.

Biography:

Mauro Dragone is Assistant Professor with the Research Institute of Signals, Sensors and Systems (ISSS), School of Engineering & Physical Sciences at Heriot-Watt University, Edinburgh Centre for Robotics. Dr. Dragone gained more than 12 years of experience as a software architect and project manager in the software industry before his involvement with academia. His research expertise includes robotics, human-robot interaction, wireless sensor networks and software engineering. Dr. Dragone was involved in a number of EU projects investigating Internet of Things (IoT) and intelligent control solutions for smart environments, before initiating and leading the EU project RUBICON (fp7rubicon.eu).


Data Plane Programmability for Software Defined Networks (18 October, 2016)

Speaker: Simon Jouet

OpenFlow has established itself as the de facto standard for Software Defined Networking (SDN) by separating the network's control and data planes. In this approach a central controller can alter the match-action pipeline of the individual switches using only a limited set of fields and actions. This inherent rigidity prevents the rapid introduction of new data plane functionality that would enable the design of new forwarding logic and other packet processing such as custom routing, telemetry, debugging, security, and quality of service.

In this talk I will present BPFabric, a platform-, protocol-, and language-independent architecture to centrally program and monitor the data plane. It will cover the design of the switches and how they differ from "legacy" or OpenFlow switches, and the design of a control API to orchestrate the infrastructure.


Turbocharging Rack-Scale In-Memory Computing with Scale-Out NUMA (12 October, 2016)

Speaker: Dr Boris Grot

Web-scale online services mandate fast access to massive quantities of data. In practice, this is accomplished by sharding the datasets across a pool of servers within a datacenter and keeping each shard within a server's main memory to avoid long-latency disk I/O. Accesses to non-local shards take place over the datacenter network, incurring communication delays that are 20-1000x greater than accesses to local memory. In this talk, I will introduce Scale-Out NUMA -- a rack-scale architecture with an RDMA-inspired programming model that eliminates the chief latency overheads of existing networking technologies and reduces remote memory access latency to a small factor of local DRAM. I will overview key features of Scale-Out NUMA and will describe how it can bridge the semantic gap between software and hardware through integrated support for atomic object reads.

Bio:

Boris Grot is a Lecturer in the School of Informatics at the University of Edinburgh. His research seeks to address efficiency bottlenecks and capability shortcomings of processing platforms for big data. His recent accomplishments include an IEEE Micro Top Pick and a Google Faculty Research Award. Grot received his PhD in Computer Science from The University of Texas at Austin and spent two years as a post-doctoral fellow at the Parallel Systems Architecture Lab at EPFL.


Full Section Meeting and Strategic Discussion (11 October, 2016)

Speaker: Phil Trinder

This session is essential for all members of the Systems Section. We will

  • Meet new PhD students in the section
  • Discuss progress since the Away Day
  • Discuss strategic plans, including:
    • A Centre for Doctoral Training (CDT) proposal
    • A high-profile Section Workshop as part of the School’s 60th anniversary celebrations

Feel free to propose other topics by email to Phil.Trinder@glasgow.ac.uk.


Towards Reliable and Scalable Robot Communication (10 October, 2016)

Speaker: Phil Trinder

The Robot Operating System (ROS) is the de facto standard middleware for modern robots. However, communication between ROS nodes has scalability and reliability issues in practice. This talk reports an investigation into whether Erlang's lightweight concurrency and reliability mechanisms have the potential to address these issues. The basis of the investigation is a pair of simple but typical robotic control applications, namely two face-trackers: one using ROS publish/subscribe messaging, and the other a bespoke Erlang communication framework.

The talk reports experiments that compare five key aspects of the ROS and Erlang face trackers. We find that Erlang communication scales better, supporting at least 3.5 times more active processes (700 processes) than its ROS-based counterpart (200 nodes) while consuming half of the memory. However, while both face tracking prototypes exhibit similar detection accuracy and transmission latencies with 10 or fewer workers, Erlang exhibits a continuous increase in the total time taken to process a frame as more agents are added, which we have identified is caused by function calls from Erlang processes to Python modules via ErlPort. A reliability study shows that while both ROS and Erlang restart failed computations, the Erlang processes restart 1000–1500 times faster than ROS nodes, reducing robot component downtime and mitigating the impact of the failures.

Joint work with Andreea Lutac, Natalia Chechina, and Gerardo Aragon-Camarasa.


Teach You a Haskell Course (20 September, 2016)

Speaker: Jeremy Singer

This week, our Functional Programming in Haskell course began. We have around 4000 learners signed up for this massive open online course. Wim and I have spent the past six months developing the learning materials, mostly adapted from the traditional Functional Programming 4 course.

In this talk, I will give an overview of the challenges involved in setting up and running an online course. In short, hard work but very rewarding!

https://www.futurelearn.com/courses/functional-programming-haskell/1


Systems: Climate is what you expect, the weather is what you get! (13 September, 2016)

Speaker: Professor Saji Hameed

This old saying among weather forecasters is correct, yet it gives little insight into the workings of the climate system. While weather can be understood and simulated as instabilities arising within the atmosphere, climate involves interactions and exchanges of properties among a wide variety of subsystems, including, for example, the atmosphere, the ocean and the land. I will first discuss an example of these interactions at play, showcasing the El Nino phenomenon. In the rest of the talk, I will endeavor to describe how software for climate models integrates experience and expertise across a wide range of disciplines, and the computational challenges faced by the climate modeling community in doing so.

Biography: Professor Hameed is a Senior Associate Professor at the University of Aizu in Fukushima, Japan. He was the Director of Science at APCC, Korea, and has been appointed a Senior Visiting Scientist at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC).

He is credited with the discovery of an ocean-atmosphere coupled mode, the "Indian Ocean Dipole", which radically changed the prevailing paradigms. At APCC he pioneered an information-technology-based approach for generating and distributing climate information for societal benefit. He has also worked with APCC and its international partners to develop a climate-prediction-based approach to managing severe haze and forest fires in Southeast Asia, a severe environmental pollution issue in the area. He is also working closely with scientists at the National Institute of Advanced Industrial Science and Technology (Japan) to apply climate and weather science to renewable energy applications.

His current work includes investigating Super El Nino using computational modeling approaches, analyzing climate data with machine learning algorithms, tracking clouds and rain with low-cost GPS chips, and continuing investigations into the Indian Ocean Dipole, which affects global climate.


Systems Seminar: Sorting Sheep from Goats - Automatically Clustering and Classifying Program Failures (07 September, 2016)

Speaker: Marc Roper

In recent years, software testing research has produced notable advances in the area of automated test data generation. It is now possible to take an arbitrary system and automatically generate volumes of high-quality test data. But the problem of checking the correctness or otherwise of the outputs (termed the "oracle problem") still remains.
This talk examines how machine learning techniques can be used to cluster and classify test outputs to separate failing and passing cases.
The feasibility of the approach is demonstrated, and shown to have the potential to reduce by an order of magnitude the number of outputs that need to be examined following a test run.

This is joint work carried out with Rafig Almaghairbe.
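
As a hedged illustration of the clustering idea described above (the feature encoding and algorithm here are generic stand-ins, not the specific techniques evaluated in the talk):

    import numpy as np
    from sklearn.cluster import KMeans

    # Each row encodes one test execution (e.g. output length, numeric
    # result, exit status); failing runs often look unlike passing ones.
    outputs = np.array([
        [120, 3.14, 0], [118, 3.14, 0], [121, 3.15, 0],   # typical runs
        [119, 3.13, 0], [122, 3.14, 0],
        [ 12, 0.00, 1], [ 15, 0.00, 1],                   # anomalous runs
    ])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(outputs)

    # Inspect the smaller cluster first: it is more likely to hold failures,
    # so far fewer outputs need manual examination after a test run.
    sizes = np.bincount(labels)
    suspect = np.argmin(sizes)
    print("runs to examine first:", np.where(labels == suspect)[0])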


Biography: Dr Marc Roper is a Reader in the Department of Computer and Information Sciences at the University of Strathclyde. He has an extensive background in software engineering, particularly in understanding and addressing the problems associated with designing, testing, and evolving large software systems. Much of his research has incorporated significant empirical investigations, either based around controlled participant-based experiments or through the analysis of open-source systems and large-scale repositories. His more recent work has explored the application of search-based strategies and machine learning techniques to software engineering problems such as test data generation, the identification of security anomalies, and automatic fault detection. His current interests lie in the area of software analytics, in particular building models of software systems' behaviour to automatically identify and locate faults.