We develop text rewriting techniques that normalise, de-bias and de-identify text, with the aim of building responsible AI models that protect individuals' sensitive information. This includes Reinforcement Learning (RL) methods applied to Large Language Models (LLMs). Such models could serve as digital twins to monitor and improve the mental health of vulnerable individuals. We also work on the transparency of our models, conveying their decisions in collaborative protocols by reporting uncertainty and bias estimates; this includes Bayesian Deep Learning techniques. In addition, we integrate LLMs into agents for language acquisition problems in educational and healthcare contexts. Beyond mental health, we also work with legal data.
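To illustrate the kind of uncertainty reporting mentioned above, here is a minimal sketch of one common Bayesian Deep Learning approximation, Monte Carlo dropout, on a toy linear model. The function name, weights and inputs are all hypothetical values chosen for illustration, not part of any lab system.

```python
import random
import statistics

def mc_dropout_predict(x, w, b, n_samples=200, p_drop=0.5, seed=0):
    """Monte Carlo dropout: keep dropout active at prediction time,
    run several stochastic forward passes, and use the spread of the
    predictions as a simple uncertainty estimate."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        # sample a dropout mask and rescale the kept units (inverted dropout)
        kept = [xi / (1 - p_drop) if rng.random() > p_drop else 0.0 for xi in x]
        preds.append(sum(k * wi for k, wi in zip(kept, w)) + b)
    # mean = the model's prediction, stdev = its reported uncertainty
    return statistics.mean(preds), statistics.stdev(preds)

# toy input and weights (hypothetical, for illustration only)
mean, std = mc_dropout_predict([0.2, 0.4, 0.6], [1.0, -0.5, 0.3], b=0.1)
```

In a real LLM setting the stochastic forward passes would run through the full network rather than a single linear layer, but the principle is the same: the variance across samples quantifies how confident the model is in its decision.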
Seminar given by Julia Ive, Queen Mary University, invited to ETIS in June 2024.
Dr Julia Ive represents the Responsible AI UK Keystone project Addressing Socio-technical Limitations of LLMs for Medical and Social Computing (AdSoLve), led by Prof Maria Liakata, at CogX Los Angeles.
Dr Julia Ive and Professor Gianluca Sergi will discuss what crisis points may have emerged recently and what AI governance structures might look like.
Dr Julia Ive will present a seminar at Queen Mary University of London with David Leslie and Dr Isadora Cruxen.
Julia and Vishal presented a poster at the AI for Mental Health Monitoring workshop, an Alan Turing Institute Fringe Event.
Julia and Mariia presented a poster on Mitigating Bias in Pediatric Mental Health Notes via Rewriting.
Dr Julia Ive will speak at the Regulating AI in Digital Mental Health Forum, an Alan Turing Institute Fringe Event.
Dr Julia Ive is the lab lead and an expert in guiding foundation models for text generation with Reinforcement Learning (RL). Her track record includes a major scientific breakthrough in generating synthetic mental health text, which she pioneered in 2018 in a pilot project with colleagues from King's College London, Cambridge and Oxford. The methodology stemming from that project was published in the prestigious journal Nature Digital Medicine. At Queen Mary University of London (QMUL), she has been the module organiser of the MSc-level Artificial Intelligence course. She has taught Neural Networks and Natural Language Processing both at Imperial College London, with Prof Lucia Specia, and at the Department of Computing at Queen Mary University of London. Beyond university teaching, she has developed and delivered courses for pre-university students (ages 18-19, Oxford summer courses) as well as for industry practitioners, for example the online Responsible AI course at The Alan Turing Institute.
Mariia holds a Master of Science with Distinction in Computer Science from Queen Mary University of London and a Bachelor of Science in Natural Language Processing from the Higher School of Economics.
Mateusz holds a Bachelor's in Electronic & Electrical Engineering from University College London and is studying towards a PhD in Electronic Engineering at Queen Mary University of London. He is currently working on zero-shot text anonymisation using large language models, open-ended search to generate prompts, and alignment of foundation models.
Vishal holds a Master's in Artificial Intelligence in Computer Vision and Robotics with Distinction and a Bachelor's in Computer Science from RGPV University, India. He is currently working on identifying and mitigating bias in EHRs using generative AI, tracking progress in cognitive behavioural therapy subject to physical activity and exercise, and understanding privacy and its impact on chatbots.