JIVE

The JIVE team is working on ethical aspects of human-AI collaboration, such as bias, privacy and transparency.

We focus on text rewriting techniques that normalise, de-bias and de-identify text, building responsible AI models that protect individuals' sensitive information. This includes Reinforcement Learning (RL) methods applied to Large Language Models (LLMs). Such models could serve as digital twins to monitor and improve the mental health of vulnerable individuals. We also work on the transparency of our models, conveying their decisions in collaborative protocols by reporting uncertainty and bias estimates, drawing on Bayesian Deep Learning techniques. In addition, we work on integrating LLMs into agents for language acquisition problems and for educational and healthcare contexts. Beyond mental health, we also work with legal data.
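As a deliberately simplified illustration of the de-identification step, the sketch below replaces a few common identifier patterns with placeholder tokens. This is a rule-based toy, not the lab's RL-guided LLM rewriting; the pattern set and placeholder labels are illustrative assumptions.

```python
import re

# Minimal illustrative de-identifier: swaps simple identifier patterns
# for [CATEGORY] placeholders. Real de-identification systems (including
# LLM-based rewriting) cover many more categories and use context;
# this sketch only handles three easy surface patterns.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def de_identify(text: str) -> str:
    """Replace each matched span with its [CATEGORY] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Contact the patient at jane.doe@example.com or 020 7946 0958, seen 01/02/2024."
print(de_identify(note))
```

A learned rewriter would go further, e.g. replacing names with realistic surrogates rather than placeholders so that downstream models see fluent text, which is one motivation for generative approaches over rules.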

Videos

Publications

News

June 13, 2024

What will we be saying about AI in ten years’ time?

Seminar by Julia Ive, Queen Mary University, invited speaker at ETIS in June 2024.

May 9, 2024

Addressing Socio-technical Limitations of LLMs for Medical and Social Computing

Dr Julia Ive represents the Responsible AI UK Keystone project Addressing Socio-technical Limitations of LLMs for Medical and Social Computing (AdSoLve), led by Prof Maria Liakata, at CogX Los Angeles.

April 22, 2024

What will we be saying about AI in ten years’ time?

Dr Julia Ive and Professor Gianluca Sergi will be discussing what crisis points may have emerged recently and what AI governance structures might look like.

April 18, 2024

The promise of AI: working across disciplines for the public good

Dr Julia Ive will be presenting a seminar at Queen Mary University of London with David Leslie and Dr Isadora Cruxen.

Mar 28, 2024

Mental Health Monitoring workshop

Julia and Vishal presented a poster at the AI for Mental Health Monitoring workshop, an Alan Turing Institute Fringe Event.

March 21-22, 2024

Artificial Intelligence in Healthcare: Shaping the Future of Science (AI4H) Conference, University of Padua, Italy

Julia and Mariia presented a poster on Mitigating Bias in Pediatric Mental Health Notes via Rewriting.

Mar 18, 2024

Regulating AI in Digital Mental Health Forum

Dr Julia Ive will be speaking at the Regulating AI in Digital Mental Health Forum, AI Turing Fringe Event.


The Team


Dr Julia Ive is the lab lead and an expert in guiding foundation models for text generation with Reinforcement Learning (RL). Her track record includes a major scientific breakthrough in generating synthetic mental health text, which she pioneered in 2018 in a pilot project with colleagues from King's College London, Cambridge and Oxford. The methodology stemming from that project was published in the prestigious journal Nature Digital Medicine. At Queen Mary University of London (QMUL), she has been the module organiser of the MSc-level Artificial Intelligence course. She has taught Neural Networks and Natural Language Processing both at Imperial College London, with Prof Lucia Specia, and at the Department of Computing at Queen Mary University of London. Beyond university teaching, she has developed and delivered courses for pre-university students (18-19 years, Oxford summer courses) as well as for industry practitioners, for example the online ResponsibleAI course at The Alan Turing Institute.

Teaching

  • 2022 – 2024 MSc, Neural Networks & NLP, Queen Mary University of London
  • 2021 – 2024 MSc, Artificial Intelligence, Queen Mary University of London
  • 2020 – 2021 MSc, NLP (lectures on text classification and Transformers), Imperial College London, module organiser: Prof Lucia Specia

  • ResponsibleAI course at The Alan Turing Institute. For interested experts, a key outcome is familiarity with the main techniques for designing explainable (XAI) and transparent AI systems and the ability to apply them in practice; for AI practitioners in particular, another key outcome is the ability to build NLP classification models that explain their decisions in natural language (GitHub).

Mariia has a Master of Science with Distinction in Computer Science from Queen Mary University of London, and a Bachelor of Science in Natural Language Processing from the Higher School of Economics.

Mariia Ignashina

Mateusz has a Bachelor's in Electronic & Electrical Engineering from University College London and is studying towards a PhD in Electronic Engineering at Queen Mary University of London. He is currently working on zero-shot text anonymisation using large language models, open-ended search to generate prompts, and alignment of foundation models.

Mateusz Dziemian

Vishal has a Master's in Artificial Intelligence in Computer Vision and Robotics with Distinction, and a Bachelor's in Computer Science from RGPV University, India. He is currently working on identifying and mitigating bias in EHRs using generative AI, tracking progress in cognitive behavioural therapy in relation to physical activity and exercise, and understanding privacy and its impact on chatbots.

Vishal Yadav

Collaborators