Dr Jake Lever
- Lecturer (School of Computing Science)
- Affiliate (School of Cancer Sciences)
I am a Lecturer (Assistant Professor) in the Information, Data and Analysis section in the School of Computing Science. My research focuses on extracting biomedical knowledge from published research literature using natural language processing and machine learning methods. This helps researchers find important research knowledge and paves the way for representing biological knowledge computationally so that artificial intelligence can reason over it. I have focused on precision medicine, which aims to tailor treatment to an individual patient's genetics and frequently relies on the latest research findings.
Before I moved to Glasgow, I spent two years at Stanford University as a postdoctoral researcher in the Helix Group. I received my Ph.D. in Bioinformatics from the University of British Columbia in Vancouver, Canada, where I undertook research at Canada's Michael Smith Genome Sciences Centre at BC Cancer. I completed my B.Eng. degree in Software Engineering at the University of Edinburgh.
More about my work and information about working with me can be found on my personal website.
- Natural language processing, especially in biomedicine
- Information extraction & retrieval
- Knowledge bases & knowledge inference
- Biomedical applications of machine learning
- Bioinformatics & computational biology
“Hey Siri, what drug should we try in our next clinical trial?” — a future doctor
There are many problems to solve before a computer could rationally answer that question. It would need to read and understand basic biomedical knowledge along with the latest research. It would need to reason intelligently and balance the strength of evidence for different research ideas. Finally, it would need to explain itself clearly, with substantial evidence, before any doctor or patient would consider taking an idea directly from an artificial intelligence. I am interested in chipping away at some of these problems and encouraging researchers to use machine learning to read and work with the latest biomedical research.
I have initially focused on the first challenge: using machine learning to read biomedical research and extract knowledge. The vast scale of biomedical research is hard to comprehend and challenging to keep up with, especially for interdisciplinary researchers working across fields. Machine learning tools are essential to help researchers digest the papers they need to read in order to develop exciting new research hypotheses.
Biomedical research relies heavily on databases that keep the latest biomedical knowledge in a structured, well-curated format. Unfortunately, these databases are costly to maintain in terms of both time and money. I have developed methods for building such databases directly from the literature (CancerMine) and for helping database curators find relevant papers (CIViCmine and PGxMine). These resources have focused on precision medicine, which predicts the best treatment for a patient given their unique genetics.