This page lists the current research projects that the Autonomy and Verification Network is involved in. Clicking a project’s logo or picture will take you to the project’s page or external website. This page also has information about our Previous Projects.
Current projects
Centre for Robotic Autonomy in Demanding and Long-lasting Environments
CRADLE brings together the industrial experience that Jacobs has in applied Robotics and Autonomous Systems with the research expertise of the University of Manchester in this field, to create a collaborative research centre that is internationally leading and sustainable in the long term.
The Centre for Robotic Autonomy in Demanding and Long-lasting Environments (CRADLE) Prosperity Partnership is funded by UKRI, the University of Manchester and Jacobs Ltd. Further details can be found on the CRADLE website.
Interlinked Futures
This project focuses on new advances in interconnected computing technology, to identify issues related to UK Critical National Infrastructure (CNI) from 2023 to 2040. It includes a Delphi study and a workshop.
The project builds on previous work with futurists forecasting the future implications of ubiquitous connectivity, artificial intelligence, mixed reality, low- and no-code solutions, and digital ownership, where these challenge UK wellbeing and values. The timeframe we are considering is the next 15 years or so, specifically to 2040; the project will identify specific ways in which the forecast changes may affect different aspects of the 13 CNI sectors and related national interests.
The research focusses on possible issues—technical, sociotechnical, and societal—and attempts to establish both the possible impact and the likelihood of those issues.
Computational Agent Responsibility
In this multi-disciplinary project we aim to devise a framework for autonomous systems responsibility that is philosophically justifiable, effectively implementable, and practically verifiable. This paves the way for broader philosophical studies, the formal verification of system responsibility, sophisticated explanations and the use of responsibilities as a driver for agent decisions and actions.
The Computational Agent Responsibility project is part of the broader Trustworthy Autonomous Systems Programme. Further details can be found on the Computational Agent Responsibility website.
Trustworthy Autonomous Systems Verifiability Node
Manchester: UKRI-funded project
The Verifiability Node, part of the broader Trustworthy Autonomous Systems Programme, will develop novel rigorous techniques that automate the systematic and holistic verification of autonomous systems, and will provide a focal point for verification research in the area of autonomous systems, linking to both national and international initiatives. Our research will particularly address high-level decision-making concerning ethics, regulations, etc., through techniques spanning formal verification and runtime verification.
Further details can be found on the Verifiability Node website, or on Twitter: @tas_verif.
Previous Projects
Robotics and AI Hubs
The Autonomy and Verification Group is involved in three Robotics and Artificial Intelligence hubs. Each hub focusses on autonomous robotics in a different hazardous environment: the RAIN Hub in nuclear robotics, the ORCA Hub in offshore robotics, and the FAIR-SPACE Hub in space robotics.
Each of these hubs is a project in itself, but our research cuts across all three hubs, so further details about our work can be found on our Robotics and AI Hubs page.
Reconfigurable Autonomy
The Reconfigurable Autonomy project is an EPSRC-funded collaboration between Computer Science researchers at the University of Liverpool, Autonomous Control Systems and Astronautics researchers at the University of Southampton, and Spacecraft Autonomy researchers at Surrey Space Centre. The project aims to provide a rational agent architecture that controls autonomous decision-making, is re-usable and generic, and can be configured for many different autonomous platforms.
Further details can be found at the Reconfigurable Autonomy website.
Science of Sensor Systems Software
Science of Sensor Systems Software (S4) is an EPSRC programme grant led by the University of Glasgow with the Universities of St Andrews and Liverpool and Imperial College London. The project aims to tackle the challenges of developing and deploying verifiable, reliable, autonomous sensor systems that operate in uncertain, multiple and multi-scale environments. The S4 programme will develop a unifying science, across the breadth of mathematics, computer science and engineering, that will let developers engineer for uncertainty and ensure that their systems, and the information they provide, are resilient, responsive, reliable, statistically sound and robust.
Further details can be found on the Science of Sensor Systems Software website, or on Twitter: @S4programme.
Trustworthy Robotic Assistants
The Trustworthy Robotic Assistants project links Bristol Robotics Laboratory, the University of Bristol, the University of Hertfordshire, the University of Liverpool and the University of the West of England, with several industrial partners. The project aims to tackle the challenges of safety verification for human–robot interaction by combining formal verification, simulation-based testing, and formative user evaluation.
Further details can be found on the Trustworthy Robotic Assistants website.
Verification and Validation of Autonomous Systems
The Verification and Validation of Autonomous Systems Network links researchers from 34 universities from across the UK who work on the verification and validation of autonomous systems. The network aims to highlight case studies and challenges in this area of research, and to provide a roadmap for future research in this area. The network also provides a route for researchers to disseminate their work to industry, government, and the public.
Further details can be found on the Verification and Validation of Autonomous Systems website, or on Twitter: @vavasdotorg.
Verifiable Autonomy
The Verifiable Autonomy project is an EPSRC-funded collaboration between Computer Science researchers at the University of Liverpool, roboticists at the University of the West of England, and control scientists at the University of Sheffield.
The project aims to tackle the challenges of robotic systems that can make their own decisions without direct human intervention. The lab’s work on this project involves extending existing program model-checking techniques for BDI-style agents, especially where this involves checking legal or ethical requirements.
Further details can be found on the Verifiable Autonomy website.