Real-time Contactless Fall Detection Using Software Defined Radios

Supervisor: Dr Ahmad Taha

School: Engineering


Falls are the second leading cause of unintentional injury deaths worldwide, as well as a major cause of distress, pain, injury, loss of confidence, and loss of independence. In England, around a third of people aged 65+ and around half of those aged 80+ fall at least once a year. Detecting falls can be lifesaving, especially if the person becomes unconscious or immobilised. Several technologies have been investigated for fall detection, including multisensor-based, radar-based, wearable, and vision-based systems, as well as approaches using Channel State Information (CSI) extracted from Software Defined Radio (SDR) systems. However, each has drawbacks and limitations that need to be addressed. For example, vision-based systems raise several privacy challenges; wearable systems must be worn by the user, which can be restrictive and uncomfortable; and radar-based systems can be complicated and costly to install.


This project therefore aims to develop a contactless fall detection system using SDRs and cutting-edge Artificial Intelligence (AI) algorithms. The envisioned system will be capable of detecting whether a subject is present within the sensing area, whether the subject is falling, and whether the subject is performing one of three other activities: sitting, standing, or walking. A dashboard will be developed as part of the project to visualise the detected activities and alert carers in case of critical events. The system will also be designed to detect the direction of a fall. The motivation behind this feature is to give remote carers an early indication of how critical a fall is, for example whether the person may have struck a sharp edge. To achieve this, the following objectives will need to be fulfilled:



• Objective 1 – Data Collection Stage: In this stage, the SDRs, specifically USRP X300 devices, will be configured to operate in the 2.4 GHz Wi-Fi band and used to collect CSI data depicting all activities, i.e., falling, walking, sitting, standing, and others, to train the Machine Learning (ML) algorithms. See Figure 1 for a vision of the setup. (2 weeks)
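Before training, the collected CSI streams will need to be segmented into fixed-length labelled windows. A minimal sketch of this preprocessing step, assuming CSI amplitudes are available as a NumPy array of shape (time samples × subcarriers); the window and step sizes here are illustrative choices, not project-specified values:

```python
import numpy as np

def segment_csi(csi, window_size=256, step=128):
    """Split a CSI amplitude stream (time x subcarriers) into
    overlapping fixed-length windows for labelling and training."""
    windows = []
    for start in range(0, csi.shape[0] - window_size + 1, step):
        windows.append(csi[start:start + window_size])
    return np.stack(windows)

# Example: 10 s of synthetic CSI at 100 Hz over 52 subcarriers
csi = np.random.randn(1000, 52)
windows = segment_csi(csi)
print(windows.shape)  # (6, 256, 52)
```

Overlapping windows increase the number of training examples per recording session and reduce the chance that a fall event straddles a window boundary.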


• Objective 2 – AI Stage: Develop ML algorithms to accurately detect, in real time, the performed activity using cutting-edge classification algorithms and AI techniques. (3 to 4 weeks)
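The classification stage could take many forms; as one illustrative baseline (not the project's chosen method), a random forest trained on per-window feature vectors derived from the CSI data. The features and labels below are synthetic stand-ins for the data produced in Objective 1:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical activity label set, matching the proposal's classes
ACTIVITIES = ["falling", "walking", "sitting", "standing", "no_presence"]

rng = np.random.default_rng(0)
# Synthetic stand-in for CSI-derived feature vectors (e.g. per-window
# mean, variance, and spectral features); real features would come
# from the Objective 1 data collection.
X = rng.normal(size=(500, 32))
y = rng.integers(0, len(ACTIVITIES), size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify one unseen window
pred = clf.predict(X_test[:1])[0]
print(ACTIVITIES[pred])
```

A classical baseline like this gives a reference accuracy against which deep-learning models (e.g. CNNs or LSTMs over raw CSI windows) can later be compared.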


• Objective 3 – System Integration and Data Analytics Stage: Develop a dashboard that reports the activities and generates alerts on critical events, e.g., falling. The dashboard will be interfaced with the AI engine so that its real-time decisions, indicating the detected activities, can be viewed. (3 to 4 weeks)

As a future direction, the envisaged system can be extended to include more activities and tested in a realistic environment in partnership with beneficiaries, such as the NHS and housing societies, to monitor their patients remotely.
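The alert-generation logic in Objective 3 could be sketched as a simple mapping from detected activities to carer-facing messages; the message format and function name below are hypothetical, including the optional fall-direction field described earlier:

```python
def make_alert(activity, direction=None):
    """Map a detected activity to a carer-facing alert message.
    Only critical events (here, falling) trigger an alert;
    other activities return None. Hypothetical message format."""
    critical = {"falling"}
    if activity in critical:
        msg = "ALERT: fall detected"
        if direction:
            # Direction helps carers judge severity, e.g. a fall
            # towards a sharp edge
            msg += f" (direction: {direction})"
        return msg
    return None

print(make_alert("falling", "towards table edge"))
# Non-critical activities are logged on the dashboard, not alerted
print(make_alert("walking"))
```

In the integrated system, a non-None return would be pushed to the dashboard and forwarded to remote carers, while routine activities would simply update the activity timeline.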