Dr Edmond S. L. Ho

  • Senior Lecturer in Machine Learning (School of Computing Science)

Biography

Edmond Shu-lim Ho is currently a Senior Lecturer in the School of Computing Science (IDA Section) at the University of Glasgow, Scotland, UK. Prior to joining the University of Glasgow in 2022, he was an Associate Professor in the Department of Computer and Information Sciences at Northumbria University, Newcastle upon Tyne, UK (2016-2022) and a Research Assistant Professor in the Department of Computer Science at Hong Kong Baptist University (2011-2016). He received his BSc in Computer Science from Hong Kong Baptist University, his MPhil from the City University of Hong Kong, and his PhD from the University of Edinburgh.

Research interests

My research focuses on machine learning-based approaches for solving problems in Computer Vision and Computer Graphics, with a particular emphasis on analysing and modelling human data captured from visual sensors. This work has provided solutions to a wide range of research problems, including human activity understanding, person re-identification, pose estimation and motion correction, character animation, motion retrieval, and emotion analysis from body gestures and facial expressions.

Publications

Number of items: 19.

2023

Men, Q., Ho, E. S. L., Shum, H. P. H. and Leung, H. (2023) Focalized contrastive view-invariant learning for self-supervised skeleton-based action recognition. Neurocomputing, 537, pp. 198-209. (doi: 10.1016/j.neucom.2023.03.070)

Chen, S., Atapour-Abarghouei, A., Ho, E. S. L. and Shum, H. P. H. (2023) INCLG: inpainting for non-cleft lip generation with a multi-task image processing network. Software Impacts. (doi: 10.1016/j.simpa.2023.100517) (In Press)

Crosato, L., Shum, H. P. H., Ho, E. S. L. and Wei, C. (2023) Interaction-aware decision-making for automated vehicles using social value orientation. IEEE Transactions on Intelligent Vehicles, 8(2), pp. 1339-1349. (doi: 10.1109/TIV.2022.3189836)

Hu, P., Ho, E. S. L. and Munteanu, A. (2023) AlignBodyNet: deep learning-based alignment of non-overlapping partial body point clouds from a single depth camera. IEEE Transactions on Instrumentation and Measurement, 72, 2502609. (doi: 10.1109/TIM.2022.3222501)

2022

Goel, A., Men, Q. and Ho, E. S. L. (2022) Interaction mix and match: synthesizing close interaction using conditional hierarchical GAN with multi-hot class embedding. Computer Graphics Forum, 41(8), pp. 327-338. (doi: 10.1111/cgf.14647)

Hartley, J., Shum, H. P. H., Ho, E. S. L., Wang, H. and Ramamoorthy, S. (2022) Formation control for UAVs using a Flux Guided approach. Expert Systems with Applications, 205, 117665. (doi: 10.1016/j.eswa.2022.117665)

Ho, E. S. L., McCay, K. D., Marcroft, C. and Embleton, N. D. (2022) PCPP: a MATLAB application for abnormal infant movement detection from video. Software Impacts, 14, 100412. (doi: 10.1016/j.simpa.2022.100412)

Zhang, H., Ho, E. S. L. and Shum, H. P. H. (2022) CP-AGCN: Pytorch-based attention informed graph convolutional network for identifying infants at risk of cerebral palsy. Software Impacts, 14, 100419. (doi: 10.1016/j.simpa.2022.100419)

Zhu, M., Men, Q., Ho, E. S. L., Leung, H. and Shum, H. P. H. (2022) A two-stream convolutional network for musculoskeletal and neurological disorders prediction. Journal of Medical Systems, 46(11), 76. (doi: 10.1007/s10916-022-01857-5) (PMID:36201114) (PMCID:PMC9537228)

Nozawa, N., Shum, H. P. H., Feng, Q., Ho, E. S. L. and Morishima, S. (2022) 3D car shape reconstruction from a contour sketch using GAN and lazy learning. Visual Computer, 38(4), pp. 1317-1330. (doi: 10.1007/s00371-020-02024-y)

Hu, P., Ho, E. S. L. and Munteanu, A. (2022) 3DBodyNet: fast reconstruction of 3D animatable human body shape from a single commodity depth camera. IEEE Transactions on Multimedia, 24, pp. 2139-2149. (doi: 10.1109/TMM.2021.3076340)

McCay, K. D., Hu, P., Shum, H. P. H., Lok Woo, W., Marcroft, C., Embleton, N. D., Munteanu, A. and Ho, E. S. L. (2022) A pose-based feature fusion and classification framework for the early prediction of cerebral palsy in infants. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 30, pp. 8-19. (doi: 10.1109/TNSRE.2021.3138185) (PMID:34941512)

Thakur, D., Biswas, S., Ho, E. S. L. and Chattopadhyay, S. (2022) ConvAE-LSTM: convolutional autoencoder long short-term memory network for smartphone-based human activity recognition. IEEE Access, 10, pp. 4137-4156. (doi: 10.1109/ACCESS.2022.3140373)

2021

Men, Q., Ho, E. S. L., Shum, H. P. H. and Leung, H. (2021) A quadruple diffusion convolutional recurrent network for human motion prediction. IEEE Transactions on Circuits and Systems for Video Technology, 31(9), pp. 3417-3432. (doi: 10.1109/TCSVT.2020.3038145)

Chan, J. C. P. and Ho, E. S. L. (2021) Emotion transfer for 3D hand and full body motion using StarGAN. Computers, 10(3), 38. (doi: 10.3390/computers10030038)

Wang, H., Ho, E. S. L., Shum, H. P. H. and Zhu, Z. (2021) Spatio-temporal manifold learning for human motions via long-horizon modeling. IEEE Transactions on Visualization and Computer Graphics, 27(1), pp. 216-227. (doi: 10.1109/TVCG.2019.2936810) (PMID:31443030)

Hammad, M., Iliyasu, A. M., Subasi, A., Ho, E. S. L. and Abd El-Latif, A. A. (2021) A multitier deep learning model for arrhythmia detection. IEEE Transactions on Instrumentation and Measurement, 70, 2502809. (doi: 10.1109/TIM.2020.3033072)

Kar, A., Pramanik, S., Chakraborty, A., Bhattacharjee, D., Ho, E. S. L. and Shum, H. P. H. (2021) LMZMPM: Local Modified Zernike Moment per-unit Mass for robust human face recognition. IEEE Transactions on Information Forensics and Security, 16, pp. 495-509. (doi: 10.1109/TIFS.2020.3015552)

Sakkos, D., McCay, K. D., Marcroft, C., Embleton, N. D., Chattopadhyay, S. and Ho, E. S. L. (2021) Identification of abnormal movements in infants: a deep neural network for body part-based prediction of cerebral palsy. IEEE Access, 9, pp. 94281-94292. (doi: 10.1109/ACCESS.2021.3093469)

Grants

  • Turing Network Development Award, EPSRC / The Alan Turing Institute, Award Lead and Proposal Lead, 2022

  • D-FOCUS: Drone-FOrmation Control for countering future Unmanned aerial Systems, The Ministry of Defence (DASA) - Defence and Security Accelerator (Ref: DSTLX-1000140725), PI, 2019-2020

  • Autonomous Monitoring for Patients and Older People using Smart Environments with Sensor Fusion, Royal Society Yusuf Hamied International Exchange Award (Ref: IES/R1/191147), PI, 2019-2022

  • Deep Learning in Computer Graphics and Virtual Reality, NVIDIA GPU Grant, PI, 2018

  • Shoes2Run - Wearable Technology, Creative Fuse North East, Co-Investigator (PI: Shoes2Run Limited, industrial partner), 2018

  • A Multi-resolution Spatial Relation based Representation for Close Character Interactions Analysis and Synthesis, RGC General Research Fund (RGC/HKBU210813), PI, 2013-2016

  • Modelling Human-Object Interactions based on Spatial Relations for Robust Action Recognition, NSFC Young Scientists Fund (Ref: 61302176), PI, 2013-2016

  • Modelling Temporal Structure for Robust and Efficient Human Action Recognition, HKBU Faculty Research Grant (FRG2/14-15/105), PI, 2015-2016

  • Monitoring Posture for Workplace Health and Safety with A Depth Camera, HKBU Faculty Research Grant (FRG2/13-14/092), PI, 2014-2015

  • Research on Efficient Multi-Character Motion Adaptation Based on A Multi-resolution Hierarchical Model for Spacetime Optimization, HKBU Faculty Research Grant (FRG2/12-13/078), PI, 2013-2014

  • Synthesizing Physically Valid Close Interactions for Controlling Humanoid Characters and Robots, HKBU Faculty Research Grant (FRG1/12-13/055), PI, 2013-2014

Supervision

I am currently looking for PhD students interested in Computer Vision, Computer Graphics and Machine Learning. Two potential project directions are listed below, and I am open to other relevant topics as well. Candidates are expected to have strong programming skills, some prior experience in machine learning and visual computing (computer vision and/or computer graphics), and good English communication skills. Please contact me (Shu-Lim.Ho@glasgow.ac.uk) for further information.

1. Modelling Close Human-Human and Human-Object Interactions for Human Digitization

The aim of this project is to propose new methods for modelling close human-human and human-object interactions. Such approaches can be used to tackle a wide range of tasks, including scene understanding, pose estimation and 3D human reconstruction in Computer Vision, as well as synthesizing interactive content in Computer Graphics and Virtual Reality.

Analysing the relationships between humans, and between humans and objects, in images plays an important role in providing contextual information in addition to low-level features (such as key points on the human and object). Although data-driven and deep learning techniques have demonstrated encouraging results in recent years, handling scenes that contain close interactions between humans and objects remains challenging, since the key entities (human(s) and object(s)) are usually partially occluded, resulting in low-quality input data. In this research, we will bridge this gap by utilising prior knowledge about close interactions to better model human-human and human-object interactions.
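
As a rough illustration of the kind of representation involved, the sketch below encodes the spatial relation between two skeletons as pairwise joint-to-joint distances and feeds the result to a small classifier. It is only a minimal example under assumed inputs (a fixed 17-joint layout, random keypoints standing in for a pose estimator's output, and a toy MLP); it is not the supervisory team's published method.

import numpy as np
import torch
import torch.nn as nn

N_JOINTS = 17  # assumed COCO-style 2D keypoints per person


def pairwise_relation_features(person_a: np.ndarray, person_b: np.ndarray) -> np.ndarray:
    """Encode the spatial relation between two skeletons as the matrix of
    pairwise joint-to-joint distances, flattened into a feature vector.

    person_a, person_b: (N_JOINTS, 2) arrays of 2D keypoint coordinates.
    """
    diff = person_a[:, None, :] - person_b[None, :, :]  # (J, J, 2)
    dists = np.linalg.norm(diff, axis=-1)               # (J, J)
    return dists.reshape(-1)                            # (J * J,)


class InteractionClassifier(nn.Module):
    """Toy MLP mapping relation features to interaction classes (illustrative only)."""

    def __init__(self, n_joints: int = N_JOINTS, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints * n_joints, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    # Random keypoints stand in for the output of a 2D pose estimator.
    a = np.random.rand(N_JOINTS, 2).astype(np.float32)
    b = np.random.rand(N_JOINTS, 2).astype(np.float32)
    feats = torch.from_numpy(pairwise_relation_features(a, b)).unsqueeze(0)
    logits = InteractionClassifier()(feats)
    print(logits.shape)  # torch.Size([1, 5])

In practice, the prior knowledge mentioned above could enter, for example, as constraints or learned priors on which joint pairs are likely to be in contact, rather than treating all joint pairs equally as this toy feature does.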

The supervisory team has extensive experience in this area and the details of the relevant publications can be found here: http://www.edho.net/projects/close_interaction/

2. Early Prediction of Cerebral Palsy using Machine Learning and Computer Vision with Multimodal Data

The aim of this project is to propose a new machine-learning-based framework for detecting abnormal infant movements from RGB videos. In particular, the project will focus on modelling the multimodal data collected from our NHS partners to improve the robustness and accuracy of the early prediction of Cerebral Palsy (CP).

CP is the collective term given to a group of lifelong neurological conditions and is the most prevalent physical disability found in children, with 2.11 diagnoses per 1000 live births. There is also an increased prevalence of CP in infants born prematurely, with 32.4 diagnoses per 1000 infants born very preterm (28-32 weeks gestation) and 70.6 diagnoses per 1000 infants born extremely preterm (<28 weeks gestation).

As such, the early diagnosis of CP is an ongoing area of multidisciplinary research, as it has the potential to enable early clinical intervention. However, early diagnosis can be difficult and time-consuming. Diagnostic tools such as the General Movements Assessment (GMA) have produced some very promising results, and automating these assessments could improve their accessibility and also enhance our understanding of infant movement development.
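
To make the pose-based direction concrete, the sketch below summarises an infant's movement from a sequence of 2D poses using per-joint displacement statistics, the sort of feature vector that could then be passed to a classifier. The frame count, joint count and feature choice are illustrative assumptions, not the project's actual pipeline.

import numpy as np

N_JOINTS = 17  # assumed number of 2D keypoints per frame


def movement_features(pose_sequence: np.ndarray) -> np.ndarray:
    """Summarise movement as per-joint mean and standard deviation of
    frame-to-frame displacement magnitudes.

    pose_sequence: (T, N_JOINTS, 2) array of per-frame 2D keypoints.
    Returns a feature vector of length 2 * N_JOINTS.
    """
    # Frame-to-frame displacements: (T-1, J, 2) -> magnitudes (T-1, J)
    disp = np.linalg.norm(np.diff(pose_sequence, axis=0), axis=-1)
    return np.concatenate([disp.mean(axis=0), disp.std(axis=0)])


if __name__ == "__main__":
    # Random poses stand in for the output of a video pose estimator.
    poses = np.random.rand(300, N_JOINTS, 2)  # e.g. 10 s of video at 30 fps
    feats = movement_features(poses)
    print(feats.shape)  # (34,)

A multimodal extension in the spirit of the project description would concatenate or fuse such video-derived features with features from other data sources before classification.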

The supervisory team has extensive experience in this area and the details of the relevant publications can be found here: http://www.edho.net/projects/babies/

Current students:

  • Manli Zhu, Human action recognition with graph convolution (PhD student since 2020)

  • Shaun Lillie, Enhancing the learning experience of autistic students in Higher Education using AI and VR (PhD student since 2020)

  • Luca Crosato, Computer vision for autonomous vehicles (PhD student since 2020)

  • Daniel Organisciak, Neural attention mechanisms for robust and interpretable feature representation learning (PhD student since 2018, co-supervised)

Alumni:

  • Dr. Kevin McCay, Automated early prediction of cerebral palsy: interpretable pose-based assessment for the identification of abnormal infant movements (Graduated in 2022)

  • Dr. Dimitrios Sakkos, Video foreground segmentation with deep learning (Graduated in 2020)

  • Dr. Jingtian Zhang, Learning discriminative features for human motion understanding (Graduated in 2020, Co-supervised)

  • Dr. Yijun Shen, Human motion analysis and synthesis in computer graphics (Graduated in 2019, Co-supervised)

Additional information

My Personal Webpage: http://www.edho.net