Developers

Student

Sitthatka Jaratsaeng

Co-Advisor

Narongrit Kasemsap

Research Consultant

Anchalee Techasen

Advisor

Thanapong Intharah

Main Content

This study consists of three primary components.

Full Thesis Report (Thai version)

The complete thesis manuscript, covering the theoretical background and research methodology through to the discussion of findings.

Publication KST'2026

A paper manuscript derived from the thesis, focusing on the proposed predictive framework and its experimental validation for academic dissemination.

Mobile App

A mobile application developed using Expo Go to facilitate self-recorded video collection; these videos serve as input to the proposed predictive models.

Abstract

Parkinson’s disease (PD) is a neurodegenerative disorder characterized by motor impairment and reduced facial expressiveness. This study evaluated the potential of facial video data, captured solely with a smartphone front camera from 40 participants, to predict Parkinson’s disease, disease severity (Hoehn & Yahr), and motor examination score (MDS-UPDRS Part III).

In this study, three predictive models were developed and evaluated, along with five video processing methods and four facial action scenarios. For Parkinson’s disease classification, the best performance was achieved using frame-skipping combined with Principal Component Analysis (PCA) retaining 95% variance on smile videos with the XGBoost model, yielding an accuracy of 0.77, precision of 0.75, and recall of 0.86. For predicting disease severity (Hoehn & Yahr), the lowest Root Mean Squared Error (RMSE = 1.3136) and Mean Absolute Percentage Error (MAPE = 38.36%), together with the highest Coefficient of Determination (R² = 0.2421), were obtained using frame-skipping without PCA on smile videos with the Random Forest model. For predicting the motor examination score (MDS-UPDRS Part III), the best performance was achieved using smile videos processed by averaging every three frames combined with PCA (95% variance) and the XGBoost model, resulting in RMSE = 19.3365, MAPE = 45.33%, and R² = 0.1073.
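The best-performing classification configuration (PCA retaining 95% variance followed by a boosted-tree classifier) can be sketched as below. This is a minimal illustration on synthetic stand-in data, not the study's actual pipeline: scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the feature matrix, labels, and dimensions are invented for the example.

```python
# Sketch of the classification stage: PCA keeping 95% variance, then a
# gradient-boosted classifier (stand-in for XGBoost), scored with the
# same metrics reported in the abstract (accuracy, precision, recall).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-video facial features (40 participants x 17 features)
X = rng.normal(size=(40, 17))
y = rng.integers(0, 2, size=40)     # 0 = control, 1 = PD (synthetic labels)
X[y == 1] += 0.8                    # inject a weak class signal for the demo

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = Pipeline([
    ("pca", PCA(n_components=0.95)),   # keep components explaining 95% variance
    ("gbm", GradientBoostingClassifier(random_state=0)),
])
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

acc = accuracy_score(y_te, pred)
prec = precision_score(y_te, pred, zero_division=0)
rec = recall_score(y_te, pred, zero_division=0)
```

Passing a float to `n_components` is what makes PCA retain a target fraction of variance rather than a fixed number of components.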

Regarding feature importance, both the Parkinson’s disease classification model and the disease severity prediction model shared four common Action Units: upper lip raiser, lip corner raiser, lip corner depressor, and lips part. These findings confirm that facial information has strong potential as a meaningful signal for Parkinson’s disease assessment and may be further developed into an easy-to-use smartphone-based tool. Such an approach could reduce the clinical assessment burden while maintaining a high level of predictive performance.
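The Action Unit importances described above can be read directly from a fitted tree ensemble. The sketch below shows the general recipe on synthetic data; the AU list, labels, and informative columns are illustrative assumptions, not the study's actual features or results.

```python
# Sketch: extracting per-feature importances from a fitted Random Forest
# and ranking them by Action Unit name (names and data are illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
au_names = ["upper lip raiser", "lip corner raiser",
            "lip corner depressor", "lips part", "brow lowerer"]
X = rng.normal(size=(40, len(au_names)))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # make two AUs informative (synthetic)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# feature_importances_ sums to 1.0 across features; sort descending.
ranked = sorted(zip(au_names, model.feature_importances_),
                key=lambda t: t[1], reverse=True)
top_au = ranked[0][0]
```

Comparing such rankings across the classification and severity models is one way to surface the shared Action Units reported above.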

Methodology

Methodology Diagram

This figure presents the overall framework of the proposed methodology. Facial videos were collected using a smartphone front camera from 40 participants. The videos were processed using five different preprocessing techniques, including frame-skipping and frame averaging, with optional dimensionality reduction using Principal Component Analysis (PCA). Extracted facial features and Action Units were then used to train three machine learning models (Random Forest, Support Vector Machine, and XGBoost) for Parkinson’s disease classification and prediction of disease severity (Hoehn & Yahr) and motor examination score (MDS-UPDRS Part III). Model performance was evaluated using classification and regression metrics, including Accuracy, Precision, Recall, RMSE, MAPE, and R².
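The two named preprocessing techniques, frame-skipping and frame averaging (the abstract's "averaging every three frames"), can be sketched as simple array operations. This is a minimal illustration with an invented toy video; the study's actual frame rates, skip factors, and feature extraction are not reproduced here.

```python
# Sketch of the two frame-level preprocessing ideas: frame-skipping
# (keep every k-th frame) and averaging non-overlapping triplets of frames.
import numpy as np

def skip_frames(frames: np.ndarray, k: int = 2) -> np.ndarray:
    """Keep every k-th frame along the time axis."""
    return frames[::k]

def average_triplets(frames: np.ndarray) -> np.ndarray:
    """Average non-overlapping groups of three consecutive frames."""
    n = (len(frames) // 3) * 3            # drop a trailing partial group
    return frames[:n].reshape(-1, 3, *frames.shape[1:]).mean(axis=1)

# Toy "video": 10 frames, each a 4-dimensional feature vector.
video = np.arange(10 * 4, dtype=float).reshape(10, 4)
skipped = skip_frames(video, k=2)     # 10 frames -> 5 frames
averaged = average_triplets(video)    # 10 frames -> 3 averaged frames
```

Both transforms shorten the temporal axis before optional PCA, trading temporal resolution for a smaller, smoother feature sequence.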