Systems biology is the study of an organism as a single system. Instead of analysing each individual component of a cell in isolation, the cell is viewed as an interacting network of genes, proteins and biochemical reactions, and these are studied as a whole.

In the 20th century the focus was on molecular biology, which followed a ‘reductionist’ approach: individual components, such as the cell nucleus or sugar metabolism, were studied in isolation. We have since progressed to an era where systems biology plays a leading role, following a ‘holistic’ approach in which the components and their interactions are studied simultaneously. These cellular interactions are ultimately responsible for an organism’s form and function. For example, the role of the human immune system is not defined by any single cellular component or mechanism; it is comprised of numerous genes, proteins, cells and mechanisms that work together to produce a response and fight off pathogens and disease. As science has progressed in recent years, tools and technologies have been developed that allow us to examine the foundations of biological activity: genes and proteins. We have learnt that these fundamental cellular components rarely act alone; they interact with each other and with other complex molecules.

The systems approach looks at:
1. The parts that make up the system
2. How these parts interact
3. The placement of these interactions in space and time, i.e. where and when they occur

The technologies used for systems biology are high-throughput in nature. The ‘omics’ technologies provide information on the parts of the system. They include genomics (high-throughput DNA sequencing), transcriptomics (gene chips, microarrays), proteomics (mass spectrometry, 2D-PAGE, protein chips, yeast two-hybrid) and metabolomics (NMR, X-ray). These technologies remain a focus today, and the real challenge of systems biology is integrating all the ‘omics’ data.
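As a toy illustration of the integration challenge, each ‘omics’ technology can be thought of as producing a table of measurements keyed by gene, and integration as joining those tables into one per-gene record before any modelling. The gene names and values below are entirely hypothetical; this is a sketch of the idea, not a real pipeline.

```python
# Hypothetical per-gene measurements from two 'omics' layers:
transcriptomics = {"geneA": 120.0, "geneB": 15.5}  # mRNA level (microarray)
proteomics      = {"geneA": 80.2,  "geneB": 3.1}   # protein abundance (MS)

def integrate(*layers):
    """Join several gene-keyed tables, keeping genes present in every layer."""
    shared = set(layers[0])
    for layer in layers[1:]:
        shared &= set(layer)
    return {gene: [layer[gene] for layer in layers] for gene in sorted(shared)}

combined = integrate(transcriptomics, proteomics)
print(combined)  # {'geneA': [120.0, 80.2], 'geneB': [15.5, 3.1]}
```

In practice the hard part is exactly what this sketch hides: the layers are measured on different scales, with different noise and coverage, so the join is statistical rather than a simple dictionary merge.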
Once this integration has been completed, it leads to a model on which perturbation experiments can be carried out. Currently, a common problem is that much of the ‘omics’ data is qualitative; ideally, one should be able to quantify the data obtained from ‘omics’ technologies. The holy grail of systems biology is a quantitative model able to predict the response of an entire system to any perturbation, and with continued effort and new discoveries this may eventually be achieved.
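A minimal sketch of what such a quantitative, predictive model looks like (all parameter values here are invented for illustration): consider a single gene product synthesised at a constant rate and degraded in proportion to its concentration, dx/dt = k_syn − k_deg·x. The steady state is x* = k_syn / k_deg, so the model predicts the system’s response to a perturbation such as doubling the degradation rate.

```python
# Toy quantitative model of one gene product:
#   dx/dt = k_syn - k_deg * x
# integrated by Euler's method (parameter values are hypothetical).

def simulate(k_syn, k_deg, x0=0.0, dt=0.01, steps=5000):
    """Return the concentration after integrating the ODE to near steady state."""
    x = x0
    for _ in range(steps):
        x += dt * (k_syn - k_deg * x)
    return x

baseline  = simulate(k_syn=10.0, k_deg=1.0)  # steady state ~ 10/1 = 10
perturbed = simulate(k_syn=10.0, k_deg=2.0)  # doubled degradation ~ 10/2 = 5
print(round(baseline, 2), round(perturbed, 2))
```

Real systems-biology models couple hundreds of such equations, but the principle is the same: fit the rate constants from quantitative data, then predict the outcome of perturbations before doing the experiment.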
The fundamental basis of genomics is determining the DNA sequence of an organism’s entire genome. The most common way of determining a genome sequence is: fragmenting the DNA; cloning the fragments into a suitable vector; sequencing the cloned DNA (i.e. determining which bases make up each fragment); assembling the sequences into a single large molecule; and filling in the gaps. There are a number of strategies for carrying out the first steps of fragmenting and cloning the DNA.

1. Production of an ordered library
In summary, DNA is fragmented into smaller pieces in a predictable manner and order and cloned into a large, suitable vector; it is then subcloned into multicopy vectors so that you know exactly what you are sequencing, which helps later in assembly. Large fragments of DNA are cloned into a large vector. A vector commonly used for this is the Bacterial Artificial Chromosome (BAC) vector, a small plasmid constructed from the functional fertility plasmid (F-plasmid) of E. coli. It possesses a number of regulatory genes originating from the F-plasmid, including oriS, which mediates unidirectional replication, and parA, which maintains the copy number at one or two. A low copy number is ideal for preventing recombination between DNA fragments in the vector. It also carries a chloramphenicol resistance marker. The BAC is an extremely useful tool due to its ease of manipulation and its ability to stably maintain large fragments of...
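The assembly step in the sequencing pipeline described earlier (joining overlapping sequenced fragments back into one molecule) can be sketched as a toy greedy overlap assembler. The reads below are made up, and real assemblers must additionally cope with sequencing errors, repeats and unknown strand orientation; this only illustrates the core idea of merging by suffix/prefix overlap.

```python
# Toy greedy assembler: repeatedly merge the pair of reads with the
# longest suffix/prefix overlap until a single sequence remains.

def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        # Find the pair (a, b) with the longest overlap of a's tail with b's head.
        n, a, b = max(((overlap(a, b), a, b)
                       for a in reads for b in reads if a is not b),
                      key=lambda t: t[0])
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[n:])  # merge, keeping the overlapping region once
    return reads[0]

print(assemble(["GGTCA", "TCAAT", "ATGGT"]))  # -> ATGGTCAAT
```

Gap filling, the final step mentioned above, corresponds to the case where no overlap exists between two contigs and targeted sequencing is needed to bridge them.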