Dr. Adarsh Valoor

Hey, I'm Adarsh Valoor

Post Doctoral Research Associate @ University of Southampton
Researcher - Responsible AI UK

UPDATES

BIOGRAPHY

I am currently working as a Post-Doctoral Research Associate at the Agents, Interaction and Complexity Group within the Department of Electronics and Computer Science at the University of Southampton. I am part of Responsible AI UK, working with Prof. Gopal Ramchurn.

Previously, I was a PhD student at the National Institute of Technology, Tiruchirappalli, India, in the Department of Computer Applications with Prof. G. R. Gangadharan. During my doctoral journey, I was honored to be a DST-INSPIRE Fellow. I also hold a Bachelor of Science in Physics from NSS College Ottapalam, University of Calicut, and a Master of Science in Computer Science from Central University of Tamil Nadu.

My research primarily focuses on the application of artificial intelligence (AI) in medical diagnostics, with an emphasis on developing interpretable and responsible machine learning models for neurodegenerative diseases. Throughout my doctoral studies, I worked extensively on improving the accuracy and transparency of AI-based tools. This work has led to contributions in renowned journals such as Nature Scientific Reports and Machine Learning Journal. The goal of my research is to bridge the gap between cutting-edge AI technologies and their practical, responsible applications in healthcare.

PROJECTS

These are some of the projects I'm currently working on.


Off Grid AI Deployments

This project combines ethnographic, design, and AI engineering approaches to co-create off-grid AI solutions for communities in India and the UK. It develops vision and audio classifiers that function with intermittent power and connectivity, using these as tools in community co-design workshops. The goal is to uncover use cases, build prototypes with model-reduction techniques, and enable local, community-driven updates to the models.

Funding : £68,000 (Part of RAI-UK India Sandpit Project)

Members : Dr. Neelima Sailaja (University of Nottingham), Dr. Thomas Reitmaier (Swansea University), Dr. Adarsh Valoor (University of Southampton) & Dr. Devika Jay (IIT Madras)
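
To give a flavour of the model-reduction step mentioned above, here is a minimal sketch assuming PyTorch; the tiny classifier, layer sizes, and pruning ratio are placeholders rather than the project's actual pipeline.

```python
# Minimal sketch (not the project's actual pipeline): shrink a small image
# classifier with magnitude pruning and post-training dynamic quantisation,
# the kind of model reduction needed for intermittent-power devices.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(                      # placeholder vision classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),                       # e.g. four community-defined classes
)

# 1) Remove half of the smallest-magnitude weights in each layer.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")      # make the sparsity permanent

# 2) Quantise the remaining linear layers to int8 for cheaper inference.
quantised = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantised(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 4])
```

Pruning and dynamic int8 quantisation are only two of many possible reduction techniques; the point is that the reduced model can run on low-power hardware with unreliable connectivity.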

Navigating the Neural Nexus Through Explainable AI

This project explores the use of Artificial Intelligence (AI) in neuroinformatics to address challenges posed by neurological disorders, focusing on stress, depression, and Alzheimer’s Disease (AD). It emphasizes the development of explainable AI (XAI) methodologies, aiming to revolutionize diagnostics and therapy in these domains. By integrating graph convolutional networks, bias-adjusted machine learning models, and multimodal classification systems, the research advocates for personalized, precision medicine in neurology. A central theme is ensuring transparency and explainability in AI applications, bridging the gap between computational models and clinical practice to foster trust and enhance their adoption. Key contributions include models for detecting mental stress via heart rate variability on resource-constrained devices, identifying depression using bias-corrected social media analytics, and differentiating AD and Mild Cognitive Impairment through advanced imaging techniques. Additionally, it examines the causal link between chronic stress and AD.

Supervisor : Prof. G. R. Gangadharan


PUBLICATIONS


Next-Generation Phosgene Detection: Convolutional Neural Network with Triphenylamine and N-Salicylaldehyde Probes for Enhanced Sensitivity and Bioimaging

We developed a smartphone-based technique using Convolutional Neural Networks (CNNs) for real-time, portable phosgene detection. Unlike traditional fluorescence spectroscopy, which requires specialized equipment and expertise, this CNN-based approach is accessible, affordable, and fast, making it ideal for on-the-spot detection.

Authors : Abhijna Krishna. R, Adarsh Valoor, Shu-Pao Wu & Velmathi. S
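
As a rough, hypothetical illustration of the smartphone-image idea (not the published architecture; the layer sizes and binary output are placeholders):

```python
# Illustrative only: a tiny CNN that maps a smartphone photo of the probe
# solution to a phosgene-present / phosgene-absent score.
import torch
import torch.nn as nn

class ProbePhotoCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 1)   # logit: phosgene detected or not

    def forward(self, x):                    # x: batch of RGB photos
        return self.classifier(self.features(x).flatten(1))

logit = ProbePhotoCNN()(torch.randn(1, 3, 128, 128))
print(torch.sigmoid(logit))                  # probability-like score
```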

A Case-based Counterfactual Methodology for Explainable Deep Learning

This study develops a novel methodology combining U-Net and GAN models to create comprehensive counterfactual diagnostic maps for AD. The proposed methodology applies case-based counterfactual reasoning to produce robust counterfactual maps, showing how changes in specific features affect the model's predictions.

Authors : Adarsh Valoor & G. R. Gangadharan
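
A conceptual sketch of the counterfactual-map idea, using a stand-in generator rather than the paper's trained U-Net/GAN:

```python
# Conceptual sketch only: a counterfactual map is "generate an image the model
# would classify as the target class, then look at what changed vs. the input".
import torch
import torch.nn as nn

generator = nn.Sequential(        # stand-in for a trained U-Net/GAN generator
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

scan = torch.randn(1, 1, 96, 96)            # placeholder for an MRI slice
counterfactual = generator(scan)            # image pushed toward the target class
counterfactual_map = counterfactual - scan  # regions that would have to change
print(counterfactual_map.abs().mean())
```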


Environmentally Sustainable Detection of Arsenic using Convolutional Neural Networks and Imidazole-Based Organic Probes: Application in Food Samples and Arsenic Album

This study presents a novel approach by developing two probes, A1 and A2, based on 4-diethylaminosalicylaldehyde, 2-hydroxy-1-naphthaldehyde, and 1,2-diaminoanthraquinone. These probes are highly sensitive and selective for detecting arsenite (As(III)) and arsenate (As(V)) in water, food samples, and homeopathic medicine, with limits of detection in the nanomolar range.

Authors : Abhijna Krishna. R, Adarsh Valoor & Velmathi. S

Mental Stress Detection from Ultra Short Heart Rate Variability using Explainable GCN with Network Pruning and Quantisation

This study introduces a novel pruning approach based on explainable graph convolutional networks, aimed at tackling the complexities of existing machine learning and deep learning models for stress detection using ultra-short heart rate variability (USHRV) analysis.

Authors : Adarsh Valoor & G. R. Gangadharan
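
For readers unfamiliar with graph convolutional networks, the sketch below shows a single, hand-rolled GCN propagation step on placeholder HRV features; it is illustrative only and is not the published explainable pruning method.

```python
# Minimal GCN layer (illustrative): nodes are HRV-derived features, edges
# encode their relationships, and one propagation step is
# H' = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
import torch

def gcn_layer(x, adj, weight):
    a_hat = adj + torch.eye(adj.size(0))             # add self-loops
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return torch.relu(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ weight)

num_nodes, in_dim, out_dim = 6, 4, 2                 # placeholder sizes
x = torch.randn(num_nodes, in_dim)                   # e.g. per-feature HRV stats
adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)  # symmetric, no self-loops yet
w = torch.randn(in_dim, out_dim)

print(gcn_layer(x, adj, w).shape)   # torch.Size([6, 2])
# Pruning and int8 quantisation (as in the off-grid sketch above) would then
# shrink such a network for wearable, resource-constrained deployment.
```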


Multimodal Classification of Alzheimer's Disease and Mild Cognitive Impairment using Custom MKSCDDL Kernel over CNN with Transparent Decision-making for Explainable Diagnosis

The study presents an innovative diagnostic framework that synergises Convolutional Neural Networks (CNNs) with a Multi-feature Kernel Supervised within-class-similar Discriminative Dictionary Learning (MKSCDDL). This integrative methodology is designed to facilitate the precise classification of individuals into categories of Alzheimer's Disease, Mild Cognitive Impairment (MCI), and Cognitively Normal (CN) statuses while also discerning the nuanced phases within the MCI spectrum.

Authors : Adarsh Valoor, G. R. Gangadharan, Ugo Fiore & Paolo Zanetti
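
At a very high level, and substituting a plain linear head for the MKSCDDL step (which is beyond a short snippet), the multimodal pipeline looks roughly like this:

```python
# High-level sketch of multimodal fusion (not the published MKSCDDL model):
# imaging features from a small CNN are concatenated with placeholder
# clinical features before a 3-way AD / MCI / CN classifier.
import torch
import torch.nn as nn

image_branch = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
fusion_head = nn.Linear(8 + 3, 3)            # 8 imaging + 3 clinical features

scans = torch.randn(4, 1, 96, 96)            # placeholder imaging batch
clinical = torch.randn(4, 3)                 # illustrative clinical covariates
fused = torch.cat([image_branch(scans), clinical], dim=1)
print(fusion_head(fused).argmax(dim=1))      # predicted class per subject
```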

Fair and Explainable Depression Detection in Social Media

The imbalance in participation across age groups and demographics is normalized using a one-shot decision approach. We present intrinsic explainability in conjunction with noisy-label correction approaches, offering an innovative solution to the problem of distinguishing between depression symptoms and suicidal ideation.

Authors : Adarsh Valoor, Arun Kumar. P, Lavanya. V & G. R. Gangadharan
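
As a simple stand-in for the demographic-balancing idea (the paper itself uses a one-shot decision approach, not this), inverse-frequency group weights look like this:

```python
# Illustrative only: weight each post inversely to its demographic group's
# frequency so under-represented age groups contribute equally in training.
from collections import Counter

age_groups = ["18-25", "18-25", "18-25", "26-40", "26-40", "41-60"]
counts = Counter(age_groups)
weights = [len(age_groups) / (len(counts) * counts[g]) for g in age_groups]
print(dict(zip(age_groups, weights)))   # rarer groups get larger weights (~0.67, 1.0, 2.0)
```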


Applying Explainable Artificial Intelligence Models for Understanding Depression Among IT Workers

Artificial Intelligence (AI) systems are improving rapidly, but the growing complexity of the underlying models makes it difficult to understand how their decisions are made. Explainable Artificial Intelligence (XAI) is a subfield of AI that aims to provide intelligible explanations to the end user. This study evaluates people who are at risk of mental illness and detects early signs of depressive symptoms using XAI approaches.

Authors : Adarsh Valoor & G. R. Gangadharan
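
A minimal, model-agnostic attribution example in the spirit of the XAI approaches mentioned (the features and data below are invented placeholders, not the study's variables):

```python
# Simple stand-in for an XAI attribution step: permutation importance over
# placeholder workplace features for a placeholder depression-risk label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["work_hours", "sleep_hours", "remote_days", "tenure_years"]
X = rng.normal(size=(200, len(features)))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")   # larger drop in score => feature matters more
```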

MENTORS


Prof. G. R. Gangadharan

Prof. G. R. Gangadharan is a Professor at the National Institute of Technology (NIT), Tiruchirappalli, India. His research focuses on the intersection of technology and business perspectives, addressing critical challenges at this interface.

Prof. Gangadharan has made significant contributions to the academic community as a co-editor of the book Harnessing Green IT: Principles & Practices (Wiley-IEEE Press, 2012). He is a Senior Member of both the IEEE and ACM. Additionally, he serves as an Associate Editor for IEEE Transactions on Artificial Intelligence (TAI) and Sadhana. He also serves as a member of BIS, India.


Prof. Gopal Ramchurn

Professor Gopal Ramchurn is a distinguished Professor of Artificial Intelligence in the School of Electronics and Computer Science at the University of Southampton. He serves as the CEO of Responsible AI UK, a £31M initiative dedicated to building and supporting an international ecosystem for responsible artificial intelligence. In addition, he is the Director of the UKRI Trustworthy Autonomous Systems Hub, which serves as the focal point of the £33M UKRI Trustworthy Autonomous Systems Programme. A Turing Fellow at the Alan Turing Institute, Professor Ramchurn is also a Fellow of the Institution of Engineering and Technology, reflecting his significant contributions to the fields of AI and engineering.

Professor Ramchurn extends his expertise to the industry as the Co-CEO of Empati Ltd, an AI-driven startup focused on managing large-scale decentralized green hydrogen technologies. This role is grounded in his extensive experience in developing AI algorithms for smart grids and utilizing satellite data to monitor renewable energy assets, driving innovation in sustainable energy management and technology.


Dr. Ugo Fiore

Dr. Ugo Fiore is currently affiliated with the Department of Computer Science at the University of Salerno. Previously, he held academic positions at Parthenope University and Federico II University. Dr. Fiore's research spans diverse fields, including deep learning, optimization, nonlinear analysis, and network security, reflecting a commitment to exploring both foundational and applied aspects of these domains.

In addition to his research, Dr. Fiore contributes significantly to the academic community through various editorial roles. He serves as an Associate Editor for Applied Soft Computing (Elsevier), Soft Computing (Springer), and the Journal of Banking and Financial Technology (Springer). Furthermore, he is an editorial board member for the Journal of High Speed Networks (IOS Press), the International Journal of Soft Computing and Networking (Inderscience), the Journal of Sensor and Actuator Networks (MDPI), and Risks (MDPI).