Feature article: Challenges in post genomic molecular medicine and clinical trials
Medical doctors have long known that people differ in their susceptibility to disease and in their response to medicines. But, with little understanding of what causes these differences and little guidance on how best to account for them, treatments have been optimised for the many, not for the few.
However, this classical approach to medicine has been changing, and the genomic and post-genomic "revolution" is providing a scientific basis for individualising treatments. We now know that human DNA codes for more than 20,000 genes. Each person's overall "blueprint" is basically similar, made up of about 3 billion "letters" of code, each letter corresponding to a chemical subunit of the DNA molecule. But subtle variation in the DNA gives humans their individual identities.
Beyond physical appearance, genes determine the distinct and complex ways in which our bodies interact with and respond to the environment. The chemistry and biology taking place in our bodies at various levels (what we can call the "molecular signatures" of individuals) sometimes predispose people to particular diseases and can affect the way a person responds to therapies. These concepts form the basis of a relatively new scientific field called Molecular Medicine (MM). MM's ultimate aim is to "personalise" medical treatment by using genomic (gene), proteomic (protein) and eventually metabolomic (biochemical reaction) information to understand how a patient would react to a therapy and to select the correct therapy for him or her. There are already applications of MM. For example, in breast cancer, variants of genes linked to the disease can indicate susceptibility to developing or surviving it, whereas the production of a particular protein signals that the tumour might be controlled with the drug Herceptin. In this vein, several molecular signatures have been developed to predict prognosis and drug response in breast cancer (reviewed in [1]).
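To illustrate how such a signature might be used computationally, the sketch below scores a patient against a weighted gene panel and dichotomises the result into a risk group. The gene names, weights and threshold are entirely hypothetical and do not correspond to any published signature.

```python
# Minimal sketch of signature-based risk stratification.
# Gene names, weights and the cut-off below are illustrative only.
SIGNATURE_WEIGHTS = {"GENE_A": 0.8, "GENE_B": -0.5, "GENE_C": 1.2}
RISK_THRESHOLD = 0.0  # assumed cut-off separating the two risk groups


def signature_score(expression: dict) -> float:
    """Weighted sum of (normalised) expression values for the signature genes."""
    return sum(weight * expression.get(gene, 0.0)
               for gene, weight in SIGNATURE_WEIGHTS.items())


def risk_group(expression: dict) -> str:
    """Dichotomise the continuous score into a prognostic group."""
    return "high risk" if signature_score(expression) > RISK_THRESHOLD else "low risk"


patient = {"GENE_A": 1.0, "GENE_B": 2.0, "GENE_C": 0.1}
print(risk_group(patient))  # score = 0.8 - 1.0 + 0.12 = -0.08 -> "low risk"
```

Real signatures of this kind combine many more genes and are validated on large cohorts before any clinical use; the point here is only the shape of the computation.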
Two pillars of MM are embedded in the very name of this research field: medical science and molecular biology. All those interested in MM know its popular and effective motto: "From the bench to the bedside". However, only a small fraction of them ask themselves: "How is the knowledge moved from the former to the latter?". And this is by no means a minor question.
Although there are practical examples where this is already happening [1], difficulties remain in bridging the gap between the two. We argue here that this process could be immensely helped by a greater understanding and deployment of a third, and equally important, pillar: Computational Biosciences. This term encompasses topics such as medical computer science, bioinformatics, systems biology and literature mining, which together will contribute to organising, understanding, standardising and translating the knowledge accumulated in the laboratory to the clinic.
Some readers with a purely experimental view of the cell might be amazed to read that one of the main pillars of the medicine of the future is based on numbers, equations and inter-connected computers. Yet in traditional medicine the qualitative and "classical laboratory" data of a single patient (e.g. jaundice, elevated transaminases, tiredness) already enable the physician to reach the correct diagnosis by weighing differential diagnoses and ordering further examinations. Given the heterogeneity and sheer mass of the information involved, the incorporation of complex molecular findings into diagnosis and treatment cannot be realised without the help of computational bioscience and of mechanisms that make MM efficiently useful in the treatment of the patient.
By contrast, the data from current MM are inherently complex, heterogeneous and often produced in laboratories other than the one where the patient is treated. Ways and networks to transfer, organise, integrate and then decipher these data are still lacking; however, it is exactly the complexity of these data that sometimes allows a deeper understanding of the chemistry and biology occurring in our bodies.
The new challenges posed by MM can only be properly addressed by large collaborative efforts in which researchers from many disciplines, from geneticists and clinical specialists to computer scientists and engineers, share knowledge and work together. Because researchers from different disciplines handle sensitive data and are involved, sometimes indirectly, in the treatment of humans, attention must be paid to the ethical and legal guidelines and standards that protect patients. Given the rapid pace of developments, observance of ethical and legal requirements in MM is mandatory.
One engineering challenge is developing better systems to rapidly assess a patient's genetic profile; another is collecting and managing massive amounts of data on individual patients; and yet another is the need to create cheaper and more efficient diagnostic devices.
In addition, improved drug-development and systems-biology methods are necessary to find effective and safe drugs that can exploit the new knowledge of the differences between individuals. New technologies are needed for delivering personalised drugs quickly and efficiently to the site in the body where the disease is localised. For instance, research is being carried out to engineer nanoparticles capable of intelligently delivering a drug to its target in the body.
Information and communications technology is playing an increasingly critical role in health and life sciences owing to the profound expansion in the scope of research projects and clinical needs in the post-genomic age. Robust data-management and analysis systems are becoming essential enablers of MM. Many efforts are underway to develop standards and technologies that promote the large-scale integration of publicly available resources, systems and databases. The predicted benefits include an enhanced ability to conduct meta-analyses, an increase in the usable lifespan of data, a funding-agency-wide reduction in the total cost of IT infrastructure, and increased opportunities for the development of third-party software tools.
We think that many of these cogent issues might find a preliminary framework in the ACGT project [2]. ACGT focuses its research and development efforts on defining a framework for clinico-genomic trials in which properly collected data, stored in a single user-friendly and easily accessible virtual database (masking different geographically distributed archives), will be amenable to parallel net-based bioinformatics and sophisticated computational biostatistics (based on the GRID framework). Once these data have been collected, analysed and validated, it will be possible to build mathematical models that describe, in a unified way, both the molecular (e.g. metabolomic) and the macroscopic (e.g. pathological tissue growth) components. Realistic multi-scale models, such as the Oncosimulator, with easy-to-use software systems will help oncologists in their work. These support tools will be used both in research and, in the future, in practical clinical work, not only to better understand the mechanisms of drug action and scheduling, but also to design new clinico-genomic trials.
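The idea of a virtual database masking distributed archives can be sketched as a simple mediator that fans one query out to several data sources and merges the results into a single view. This is a minimal illustration of the pattern, not the ACGT implementation; the archive contents, field names and query interface are assumptions made for the example.

```python
# Minimal sketch of a "virtual database" mediator: one query is fanned out
# to several geographically distributed archives and the results are merged.
# Archives, record fields and the query interface are hypothetical.
from typing import Callable, Dict, List

# Each archive is modelled as a function mapping a diagnosis to matching records.
ArchiveFn = Callable[[str], List[Dict[str, str]]]


def make_archive(records: List[Dict[str, str]]) -> ArchiveFn:
    """A toy archive that filters its local records by diagnosis."""
    def query(diagnosis: str) -> List[Dict[str, str]]:
        return [r for r in records if r["diagnosis"] == diagnosis]
    return query


class VirtualDatabase:
    """Masks distributed archives behind a single query interface."""

    def __init__(self, archives: List[ArchiveFn]):
        self.archives = archives

    def query(self, diagnosis: str) -> List[Dict[str, str]]:
        merged: List[Dict[str, str]] = []
        for archive in self.archives:
            merged.extend(archive(diagnosis))  # in practice: remote, parallel calls
        return merged


clinic_a = make_archive([{"id": "A1", "diagnosis": "breast cancer"}])
clinic_b = make_archive([{"id": "B1", "diagnosis": "breast cancer"},
                         {"id": "B2", "diagnosis": "melanoma"}])
vdb = VirtualDatabase([clinic_a, clinic_b])
print(len(vdb.query("breast cancer")))  # 2 records, drawn from two archives
```

A production mediator would of course add pseudonymisation, access control and schema mapping between heterogeneous archives; the sketch only shows how a single query surface can hide where the data physically live.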
The key point of the ACGT project is that it proposes a robust general framework into which new specific tools can be "simply" inserted. This is of interest because new and robust biology-oriented mathematical algorithms are continually being produced by the research community.
Moreover, patient safety, data protection and the strict clinical evaluation of every piece of software directly or indirectly related to the treatment of patients are essential. The aim of ACGT is to create an interactive community of specialists providing a user-friendly, easy-to-use platform for all disciplines.
However, on this specific point we have an important final remark: we by no means wish to suggest that the oncologist of the future will have to be a computer scientist. He or she will simply have to work more closely with the computer-science community and become more familiar with computational tools. Note that this already happens to some extent in various fields of medicine. For example, a cardiologist must be familiar with the physics of the heart and must be able to use physical devices such as the ECG; a radio-oncologist must have an elementary knowledge of radiation physics and must be able to discuss a computed radiotherapy dose plan with physicists.
References:
[1] Sotiriou C, Piccart M. Taking gene-expression profiling to the clinic: when will molecular signatures become relevant to patient care? Nat Rev Cancer. 2007;7:545-553.
[2] Tsiknakis M et al. Developing a European grid infrastructure for cancer research: vision, architecture and services. ecancermedicalscience. 2007.