These are the current research projects that the Autonomy and Verification Network is involved in. Clicking a project’s logo or picture will take you to the project’s page or external website. This page also has information about our Previous Projects.
Robotics and AI Hubs
The Autonomy and Verification Group is involved in three Robotics and Artificial Intelligence hubs. Each hub focusses on autonomous robotics in a different hazardous environment: the RAIN Hub on nuclear robotics, the ORCA Hub on offshore robotics and the FAIR-SPACE Hub on space robotics.
Each of these hubs is a project in itself, but our research cuts across all three hubs, so further details about our work can be found on our Robotics and AI Hubs page.
Science of Sensor Systems Software
Science of Sensor Systems Software (S4) is an EPSRC programme grant held by the University of Glasgow with the Universities of St Andrews and Liverpool and Imperial College London. The project aims to tackle the challenges of developing and deploying verifiable, reliable, autonomous sensor systems that operate in uncertain, multiple and multi-scale environments. The S4 programme will develop a unifying science, across the breadth of mathematics, computer science and engineering, that will let developers engineer for uncertainty and ensure that their systems and the information they provide are resilient, responsive, reliable, statistically sound and robust.
Verification and Validation of Autonomous Systems
The Verification and Validation of Autonomous Systems Network links researchers from 34 universities from across the UK who work on the verification and validation of autonomous systems. The network aims to highlight case studies and challenges in this area of research, and to provide a roadmap for future research in this area. The network also provides a route for researchers to disseminate their work to industry, government, and the public.
Further details can be found on the Verification and Validation of Autonomous Systems website, or on Twitter @vavasdotorg.
Trustworthy Autonomous Systems Verifiability Node
A UKRI-funded project led from Manchester
The Verifiability Node, part of the broader Trustworthy Autonomous Systems Programme, will develop novel rigorous techniques that automate the systematic and holistic verification of autonomous systems, and will provide a focal point for verification research in the area of autonomous systems, linking to both national and international initiatives. Our research will particularly address high-level decision-making concerning ethics, regulations, etc., through techniques spanning formal verification and runtime verification.
Verifiable Autonomy
The Verifiable Autonomy project is an EPSRC-funded collaboration between Computer Science researchers at the University of Liverpool, Roboticists at the University of the West of England and Control Scientists at the University of Sheffield.
The project aims to tackle the challenges of robotic systems that can make their own decisions, without direct human intervention. The lab’s work on this project involves extending existing program model-checking techniques for BDI-style agents, especially where this involves checking legal or ethical requirements.
Further details can be found on the Verifiable Autonomy website.
Reconfigurable Autonomy
The Reconfigurable Autonomy project is an EPSRC-funded collaboration between Computer Science researchers at the University of Liverpool, Autonomous Control Systems and Astronautics researchers at the University of Southampton and Spacecraft Autonomy researchers at Surrey Space Centre. The project aims to provide a rational agent architecture that controls autonomous decision-making, is re-usable and generic, and can be configured for many different autonomous platforms.
Further details can be found at the Reconfigurable Autonomy website.
Trustworthy Robotic Assistants
The Trustworthy Robotic Assistants project links Bristol Robotics Laboratory, the University of Bristol, the University of Hertfordshire, the University of Liverpool and the University of the West of England, with several industrial partners. The project aims to tackle the challenges of safety verification for human-robot interaction by combining formal verification, simulation-based testing, and formative user evaluation.
Further details can be found on the Trustworthy Robotic Assistants website.