Analytics for Digital Earth - Workshop 4
Date: 28th June 2023
Time: 9.30am
Location: ARC/Online
Speakers: The 4th Analytics for Digital Earth workshop focused on how to design digital twins, examining how digital twins have been built in other areas and what lessons can be learnt from them when designing environmental digital twins. The first speaker, Prof Alison Heppenstall (University of Glasgow), discussed whether urban digital twins are achievable, illustrating the question with a number of examples. Next, Dr Kevin Wilson (Newcastle University) told us about the CReDo digital twin and discussed how expert elicitation could be integrated into digital twins. Finally, Dr Vinny Davies (University of Glasgow) spoke about ViMMS, his digital twin of a mass spectrometer, and how it can be used in practice through an Application Programming Interface (API). Additionally, there were a number of lightning talks from researchers within the University of Glasgow on digital twins for agri-environment, water, hearts, wind turbines, and plant biodiversity.
Discussion Format: There were two breakout discussion sessions, with the group split into two in-person groups and one online group for each session. The first session looked at what lessons could be learnt from some of the digital twins discussed in the lightning talks, and the second session addressed several important challenges in creating digital twins.
Session 1: The discussion in session 1 focused on lessons we can learn from some of the digital twins we have seen, with discussion points including:
- Commonalities between digital twins
- Challenges faced by multiple digital twins
- What is the importance of the application when designing digital twins?
- Does anything stop the same approaches being used in environmental digital twins?
Discussion focus points:
Importance of Application – It is important that every digital twin relates directly to its proposed application, but it was also discussed whether a single digital twin could serve multiple applications. It is also possible to use one digital twin to inform another, again suggesting multiple applications per digital twin.
Real-time decision making – This can be challenging in terms of both computational performance and closed-loop issues. It is difficult to quantify how digital twins will react to external impacts, and whether they will remain relevant once any policy changes have been implemented. It is also important to understand what "real time" means in the context of application-specific digital twins, as it will differ between scenarios.
Uncertainty quantification – This is important both in terms of the data and in terms of how policies will be implemented. In agent-based models, for instance, there are major challenges around whether the agents will actually follow the suggestions proposed by the modelled intervention.
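The uncertainty-quantification point above can be illustrated with a minimal Monte Carlo sketch (the model and compliance range here are hypothetical, not from the workshop): sample the uncertain input (how many agents follow the intervention), run a toy agent-based model for each sample, and summarise the spread of outcomes rather than reporting a single number.

```python
import random
import statistics

def intervention_model(compliance_rate, n_agents=1000, rng=None):
    """Toy agent-based intervention: each agent independently follows
    the suggested intervention with probability `compliance_rate`."""
    rng = rng or random.Random()
    followed = sum(1 for _ in range(n_agents) if rng.random() < compliance_rate)
    return followed / n_agents

def monte_carlo_uq(n_runs=500, seed=0):
    """Propagate input uncertainty through the model by repeated sampling."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_runs):
        # Assumed uncertain input: compliance uniform between 40% and 80%
        compliance = rng.uniform(0.4, 0.8)
        outcomes.append(intervention_model(compliance, rng=rng))
    return statistics.mean(outcomes), statistics.stdev(outcomes)

mean, sd = monte_carlo_uq()
print(f"expected uptake: {mean:.2f} +/- {sd:.2f}")
```

The output spread makes explicit how much of the modelled outcome hinges on the compliance assumption, which is exactly the quantity a decision maker would need alongside the point estimate.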
Linking models – Linking data and models can be challenging, especially when models or data have privacy or paywall-type restrictions that prevent others from utilising them to their full potential. There is also the challenge that different models use different software and are designed to different levels of quality.
Data collection and linkage – Almost all digital twins face the challenge of linking different types of data to their models. Many digital twins do not have data at the level of detail required for the application.
Data management – This is a big task that is often overlooked and must be accounted for in the design process.
Session 2: The discussion in session 2 focused on four key challenges that arise when creating digital twins, with the specific challenges being:
- How do we incorporate different modelling structures into a digital twin?
- What are the challenges of building a digital twin on sparse data?
- How best to handle uncertainty when designing and implementing digital twins?
- How do you use digital twins to help with decision making and policy?
Discussion focus points:
How do we incorporate different modelling structures into a digital twin?
- Important to think about the different modelling time frames and how this would affect the modelling
- There are challenges around software and data tracing.
- Can we utilise methods such as federated learning to handle data privacy, or remove sensitive information at the edge?
- Experts are essential to making this work; there are challenges around how to collect this data in sufficient quantity and without bias.
- Can we transfer information between models? There are challenges around whether this would be representative of the local system and could potentially introduce a bias.
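The federated-learning idea raised above can be sketched minimally (the sites, data, and single-parameter model here are hypothetical): each data holder computes a model update on its private data and only the parameters, never the raw data, are shared and averaged centrally.

```python
# Minimal federated averaging sketch: each site takes a gradient step on
# its own private data; only the model parameter leaves the site.
import random

def local_gradient(w, data):
    """Least-squares gradient for y = w * x on one site's private data."""
    g = 0.0
    for x, y in data:
        g += 2 * (w * x - y) * x
    return g / len(data)

def federated_fit(sites, rounds=200, lr=0.05):
    w = 0.0  # shared global parameter
    for _ in range(rounds):
        # Each site updates its local copy of w; raw data never moves.
        local_ws = [w - lr * local_gradient(w, data) for data in sites]
        w = sum(local_ws) / len(local_ws)  # central server averages parameters
    return w

# Hypothetical private datasets, all roughly following y = 3x
rng = random.Random(1)
sites = [[(x, 3 * x + rng.gauss(0, 0.1)) for x in (1.0, 2.0, 3.0)]
         for _ in range(4)]
w = federated_fit(sites)
print(f"learned slope: {w:.2f}")
```

The same pattern addresses the "remove sensitive information at the edge" point: only the derived quantity (here a single parameter) crosses the privacy boundary, while each organisation keeps its data in place.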
What are the challenges of building a digital twin on sparse data?
How best to handle uncertainty when designing and implementing digital twins?
- Uncertainty versus user – does giving users estimates with uncertainty undermine the credibility of the model? Does this lead to a closed-loop effect where users ignore the model?
- User engagement is hard to encourage – how do we get the data suppliers to actively engage with the digital twin?
How do you use digital twins to help with decision making and policy?
- Large-scale modelling is needed for the full national picture.
- How is it best to communicate with experts?
- How do we summarise a massive digital twin down to small bits of actionable information? How do we stop such a process from making the digital twin somewhat pointless?
- Communication is key. Need clear dashboards, scenario planning, or similar.
The way forward for the next workshops: Our next workshop will focus on energy, specifically renewable energy.