Peer Review
It was this time in 2006 when Gordon Brown announced his plans to overhaul the Research Assessment Exercise (RAE): the way in which the government decides on funding for science and technology research in the UK. There have been loud cries of alarm from all sides of the scientific community since then. ‘What’s wrong with the way we do things now?’ they ask.
The RAE is based on the system of Peer Review. It sounds a ludicrous idea when one first hears about it – in what other subject would competing experts in a field be allowed to critically review each other’s work and advise the politicians how good it is? Some conflict of interest, you would think, no? Yet this is the system the government has used for a hundred years, and it’s also the way the editors of scientific journals decide which work they print – and which they bin.
It has been said that Peer Review is to scientists what democracy was to Winston Churchill: ‘the worst form of government, except all those other forms that have been tried.’ So maybe the time has come to take a fresh look at Peer Review, and see if we can do any better.
The faults of Peer Review
We scientists can admit that the failings of Peer Review are not inconsiderable. The main objection (on the part of the government at least, who must foot the bill) is that the task of coordinating the independent reviews is bureaucratic and costly. Journals have to bear the expenses this brings too, and they recoup them by charging scientists large sums to read the intellectually valuable scientific goodies they contain. This seems somewhat unfair on hard-up developing countries, and isn’t all that much fun for UK universities struggling in the wake of the credit crunch either.
More fundamentally, Peer Review has been accused of slowing down the development of science – which is just not cricket. This is because well-known and well-respected experts (the people best placed to review for a journal) can be old-fashioned and loath to accept radical new ideas which contradict their accepted hypotheses. This means bright, radical, young 21st-century Darwins (i.e. people with brilliant ideas which unfortunately go completely against the grain of current accepted opinion) can go disappointingly unpublished.
Another charge against Peer Review is that it is no use whatsoever at detecting major fraud. If a researcher simply makes his graphs up, realistically, a reviewer (who could be on the other side of the world) can have no idea. This was exactly what happened in the memorable case of Hwang Woo-Suk, the (ahem) celebrated Korean researcher, and his work on the cloning of human embryonic stem cells. He published his work in the high-impact journal Science in two ‘landmark’ papers in 2004 and 2005. His written experimental section, conclusions and experiments appeared totally sound; it was just a shame he never actually carried any of them out.
The fact is, though, that Peer Review works well in 99% of cases – believe it or not. It is an excellent way of professionalizing and shaping up a paper: stopping the authors drawing rash conclusions or over-hyping their results. Reviewers can even suggest a particular experiment which might prove the results more conclusively and make the research more convincing. Most importantly, scientists trust Peer Review (and indeed it is this mutual trust which allows the system to work at all) – any change to the system will always be met with healthy scepticism.
Finally, we should remember that, as Irene Hames, editor of The Plant Journal, put it recently, ‘Peer Reviewed journals are not records of absolute truth, merely records of work carried out,’ and that in science you can only ever be right until someone proves you wrong.
The solutions
Brown’s new framework for dishing out cash to scientists, the Research Excellence Framework (REF), will do away with all that filthy bureaucracy in one swipe and replace it with a statistical system. Instead of using Peer Review directly, the REF will generate a bibliometric evaluation of how good each research application is, using figures such as the number of publications a researcher has achieved in the past year, say, or the amount of private funding they have acquired.
Many researchers argue this statistical approach is unfair – probably much worse than Peer Review ever has been. The thinking behind this objection is that great scientists could be given a poor rating if they have taken a career break (to have children or get over an illness, say) and thus haven’t published enough work that year. Early-career researchers could lose out too, if they don’t make the breakthrough they need to get published before the REF comes around, as the sketch below illustrates.
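To make the worry concrete, here is a deliberately naive scoring function of the sort critics fear. Everything in it – the field names, the weights, the figures – is invented for illustration and has nothing to do with any real REF formula; it simply shows how a purely output-based metric treats a career break as zero productivity.

```python
# Purely illustrative: a naive bibliometric score of the kind critics of the
# REF worry about. Field names, weights and numbers are invented for this
# sketch and bear no relation to any real REF methodology.

def naive_ref_score(papers_last_year, private_funding_gbp):
    """Rate a researcher purely on raw output and income."""
    return papers_last_year * 10 + private_funding_gbp / 10_000

# A brilliant researcher returning from a career break scores zero...
returning_from_break = naive_ref_score(papers_last_year=0, private_funding_gbp=0)
# ...while steady but unremarkable output scores comfortably.
steady_output = naive_ref_score(papers_last_year=6, private_funding_gbp=120_000)

print(returning_from_break, steady_output)  # 0.0 vs 72.0
```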
In my opinion, we clearly need to include some form of Peer Review in the new procedure – it is vital for judging scientific merit and avoiding unfair prejudice.
How could we make Peer Review even better, though? To stop those radical young scientists with great ideas being sidelined by reputable journals, some have called for the introduction of so-called double-blind reviewing, in which neither the reviewer nor the author of a paper knows the identity of the other. This might mean prejudice against radical newcomers is minimised.
What about the price of the journals? Could we open up science to the developing world, and make the whole process more transparent to boot, if we adopted an open access system? This would mean all journals were freely available to view (on the internet, for example), and authors themselves would have to pay to have their work published. This would be a radical reform indeed, and forcing scientists to pay to be published might lead to authors simply creating blogs of their work online, which would be equally free to view but somehow less trustworthy.
In conclusion, it appears that Churchill was right. Peer Review might not be perfect, but it is the best idea we have. Perhaps we can eventually learn to see it for what it really is: the (ever so slightly flawed) arbiter of scientific quality.
To read more about Peer Review, find out how it works and join the debate, try visiting:
http://www.senseaboutscience.org.uk/index.php/site/project/29/
Tuesday, 16 December 2008
Sunday, 7 December 2008
Awful Organic?
This is an article I wrote around a year or so ago, which I sent off to 'Spark*', the University of Reading student newspaper (fortunately for me they'll publish just about anything!). I was really embarrassed by how arrogant my first draft sounded (can't believe I sent it off sounding like that...), but I've now given it a few tweaks, so hopefully it sounds a lot less preachy this time around!
Awful Organic?
It seems like these days we can’t so much as walk down the street without some kind of advertisement presuming to tell us what we should and shouldn’t eat. The British public seem to be obsessed with food, and as a symptom of this it appears that new, weird and wonderful eating disorders are emerging on an almost daily basis. My new favourite food-related condition is orthorexia: a state where sufferers are obsessed with eating only foods which they see as ‘pure’.
But what do we mean by pure? I guess different people have different definitions: foods with a low GI, foods low in saturated fat, foods which include whole grains, food which is certified organic – and there are others. It’s organic food that I want to talk about in this article, however, because – I’m going to go ahead and say it – I think it’s (at the very best) extremely over-hyped.
Firstly, organic food is only ever organic if an accredited body, the best known being the Soil Association (SA), says it is. The SA is approved by Defra (the government’s Department for Environment, Food and Rural Affairs) and is generally well respected. One of its more recent rulings on what counts as a faux pas for organic food concerns scientifically trendy nanoparticles: synthetic nanoparticles are banned, whereas natural nanoparticles (such as the soot present in foods grown next to a power station) are deemed fine. Is it me, or does this just not conjure up a picture of purity and wholesomeness? The trouble is that they can pretty much create whatever list of acceptable chemicals they like, and these may then be used on organic crops. The key word is always ‘natural’ – as long as something is natural, it can pretty much go onto organic food.
My view is that this rule of thumb seems a bit dodgy, to say the least. This may make me sound like a bit of a heretic, but let me explain using an analogy. We use synthetic medicines to keep human bodies healthy, and in the vast majority of cases these days we have sufficient scientific knowledge to make these medicines safe. If we were to use natural remedies to cure our ailments, they would in general not be as effective. This is the key, because the same is true of plants: in general, natural pesticides and fertilisers are much, much less effective than synthetic ones. Don’t forget, this is not the 1950s and we don’t use DDT anymore; agrochemical companies spend literally millions each year testing their products, ensuring they will do us no harm and making them so effective that the amount needed for several hectares of land can be quoted in grams.
With the global credit crunch and the fact that in lots of regions of the world there are clearly not enough crops to feed the population, is such a wasteful method of farming as organic really ethical? The only advantage it appears to yield is a vague warm feeling that when we pay that extra 50p for our carrots we are somehow doing the environment good and getting healthier produce. Is it worth it?
OK, so I’ve given a pretty negative view of organic farming, and it’s true that conventional methods aren’t exactly perfect either. The fact that they create monocultures which reduce biodiversity is clearly not their most redeeming feature. And of course it takes many a long year, a lot of money and a lot of energy to take a pesticide or herbicide from conception to market, so in the process of making farming more efficient in this way we are also stamping down with a large – frankly, enormous – carbon footprint.
Read more about organic food and see what the Soil Association have to say for themselves at http://www.soilassociation.org/
Wednesday, 3 December 2008
Time for Richard Dawkins
Richard Dawkins is this week retiring from his post as the Charles Simonyi (in case you're wondering, a very rich man who used to work for Microsoft) Chair for the Public Understanding of Science at Oxford. What follows is therefore quite an apt article, which I wrote earlier this autumn.
Time for Richard Dawkins
Eminent and prolific, Professor Richard Dawkins has been a symbol of all that is scientific, intelligent and English for many a year now. The scientific community will wish him well this year, as he reaches the age for mandatory retirement from his post as Simonyi Chair for the Public Understanding of Science at the University of Oxford.
In the aftermath of his most recent and controversial (to say the least) book, ‘The God Delusion’, how can we lesser intellectual mortals engage with what has become known as the Oxford God debate? As I have considered this question I have begun to ask: are there some things which science – and even Richard Dawkins – will simply never be able to explain?
Firstly, it is imperative that we have a sound grasp of what it means to a scientist to explain something. For example, it is apparent that we exist in a universe with extremely complex laws of nature, intelligent life and beauty in many places. How do we explain the fact that it exists at all? What this boils down to, in a scientific sense, is that the existence of our universe is improbable without a cause, so any explanatory theory which makes that existence more probable is initially a reasonable one. The best thing science has got at the moment is the Darwinian theory of evolution, as propounded by Dawkins in books like ‘The Blind Watchmaker’.
The watchmaker analogy says that something with complex inner workings, such as a watch (or a human being), is so intricate that it necessitates a designer: a watchmaker (or a God). The blind watchmaker argument, as explained so eloquently by Richard Dawkins, postulates that if the watchmaker were blind (i.e. not an intelligent being) a watch might still eventually get finished, provided the watchmaker (evolution) was allowed enough time to try lots of different combinations. This makes life on earth seem much more probable – in fact, given that the time period runs to billions of years, it makes it almost certain – and so it is a very good scientific theory.
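Dawkins illustrates this power of cumulative selection in ‘The Blind Watchmaker’ with his famous ‘weasel’ program. The short Python sketch below is a loose rendering of that idea, not his original code: the target phrase, mutation rate and population size are arbitrary choices, and keeping the parent alongside its offspring is a simplification. The point is simply that random variation plus the retention of each small improvement reaches an apparently wildly improbable target in a modest number of generations, where pure chance alone would take practically forever.

```python
import random
import string

# A loose sketch of the 'weasel' idea from The Blind Watchmaker: random
# variation plus cumulative selection (keeping each small improvement)
# reaches an 'improbable' target far faster than blind chance ever could.
# The target, mutation rate and population size are arbitrary choices.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
MUTATION_RATE = 0.05
OFFSPRING = 100

def mutate(parent):
    """Copy the parent, occasionally changing a character at random."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in parent)

def fitness(candidate):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

best = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while best != TARGET:
    generation += 1
    # Keep the parent in the pool so a good match is never thrown away.
    best = max([best] + [mutate(best) for _ in range(OFFSPRING)], key=fitness)

print(f"Reached '{TARGET}' in {generation} generations")
```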
Theologians, for the most part, accept this as good science, and most likely the truth. Where some experts disagree with Dawkins is over where the universe came from in the first place. Professor Dawkins believes that something as complicated as our universe must require a cause which is at least as complicated as the universe itself. God can never, in his view, be the explanation, because he in turn would require an even more complex explanation. Was God created by a super-God, and he in turn by a hyper-God?
Keith Ward, former Regius Professor of Divinity at Oxford University, points out in his recent work that this paradox does not really exist. Since physicists agree the universe is composed not merely of space but of Stephen Hawking-esque ‘space-time’, then this, surely, is what God (if we suppose for a minute that there is one) must have created. If God created time, then it is clear he cannot have a cause – the question ‘what came before God, then?’ has no meaning once we take time out of the equation.
The more we discover about the laws which govern our universe, the harder it becomes to conduct the experiments needed to test our theories and make our observations – the switch-on of the Large Hadron Collider in Geneva has had to be delayed until after winter; unfortunately, if the superconducting magnets were switched on now, they would suck enough power out of the French national grid that the French would have nothing left to heat their homes!
Realistically speaking, we are beginning to reach the boundary of testable science when we deal with bosons and quarks.
Are these the lengths we have to go to in order to get answers about cutting edge science? Let us not give up the search for understanding, but let us also be humble enough to admit that we may simply not have the capacity to make the measurements necessary to uncover the innermost secrets of the cosmos.
Friday, 21 November 2008
'Modelling the Cell' article
What follows is the original text I wrote for an article recently published in Chemistry Review, a magazine published at the University of York and aimed at A Level Chemistry students. I don't know an awful lot about pharmacological research really; I just thought I'd have a go at writing something after I did a summer research project on a topic related to the one discussed in the article. The only problem is I haven't had time to sort out how to add in the diagrams yet! Watch this space...
Modelling the Cell
Disease is still a serious problem in the 21st century. Statistics now show that one in three people will develop cancer at some point during their lives. Chemists have always played an important role in developing drugs to combat disease and they continue to do so in modern science.
Have you ever considered how easy it is just to swallow a pill to relieve a headache or clear your sinuses? It’s very convenient for us to take drugs orally; imagine if you had to inject a syringe-full of paracetamol into your forehead each time you had a headache! Medicinal chemists have worked long and hard to design drugs which can enter our gut, pass through our cell membranes, dissolve in the blood and arrive at the site of action in large enough concentrations to actually do some good. It’s quite amazing when you think about it. For example, one problem that faces chemists is that the pH of the stomach is generally between about 1 and 3, depending on what’s been eaten recently. Unfortunately, lots of potentially great drugs contain acid-sensitive functional groups; under such highly acidic conditions amine groups will become protonated and esters may be hydrolysed. These are serious problems as the structure, and therefore the activity, of the drug is altered. The job of the medicinal chemist is to find ways of getting round these problems.
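As a rough, back-of-the-envelope illustration of the protonation problem (the pKa of 9.0 below is an assumed, typical value for a simple amine, not a figure from the article), the Henderson–Hasselbalch relationship shows how little of a basic amine survives in its uncharged, membrane-permeable form at stomach pH compared with the pH of blood:

```python
# Illustrative only: Henderson-Hasselbalch estimate of the fraction of a
# basic amine left in its uncharged (membrane-permeable) form. A conjugate-
# acid pKa of 9.0 is an assumed, typical value for a simple amine.

def neutral_fraction(ph, pka=9.0):
    """Fraction of the amine present as the free (unprotonated) base."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

for ph in (1.0, 2.0, 3.0, 7.4):
    print(f"pH {ph}: {neutral_fraction(ph):.2e} unprotonated")

# In the stomach (pH 1-3) virtually every amine group is protonated and
# charged; at blood pH (~7.4) a few per cent are neutral - a difference of
# several orders of magnitude in the form that can cross a lipid membrane.
```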
Polymer therapeutics
Diseases like cancer and HIV/AIDS present us with fresh challenges, and in some cases polymer therapeutics could offer a way forward. Polymer therapeutics are a family of medicines in which a drug molecule and a polymer are combined. In essence, the advantage of these medicines is that the polymer wraps around the drug molecule and protects it from degradation, such as by the stomach acid we just mentioned. One obvious property of such polymer-drug conjugates is their large size – much larger than standard drug molecules because of the polymer chains wrapped around them – leading to their other, somewhat more fashionable name: nanomedicines. It has recently been shown that tumours produce large amounts of permeability factors (compounds which make the lining of blood vessels more permeable). This essentially means that the blood vessels surrounding a malignant tumour are permeable to nano-sized molecules, whereas those surrounding healthy tissue are not. Polymer-drug conjugates are therefore promising candidates for new anticancer medicines because they can differentiate between healthy tissue and cancerous tissue. The drug accumulates at high concentrations at the tumour site, providing a two-fold advantage: it is more effective in dealing with the tumour and yields fewer side effects in healthy tissue.
Safe polymers?
Before we begin doling out spoonfuls of polymer to hopeful patients we must, of course, be convinced that the polymers of interest are not toxic to humans! One important way to look at this is to study the interaction of polymers with cell membranes – if a polymer disrupts the membrane, the contents will begin to spill out and the cell will die. Cytotoxicity studies (where the analyte is introduced to a cell culture) are performed by biologists, and these are a very realistic way of looking at the toxicity of a compound. Cell culture studies, however, often give us simply a ‘yes’ or ‘no’ to questions about toxicity. If we want to understand the biophysical interactions taking place, the Langmuir technique can be very useful. It allows us to create a simplified model of a cell membrane and investigate which types of molecule interact with it. Using a modelling technique has the advantage that one can control parameters such as pH and see their individual effect on polymer–lipid interactions. So while the Langmuir technique is a simplification (it does not take into account the numerous protein channels and other moieties on the cell surface, for example), it is an invaluable tool.
In real life, cell membranes are composed of a bilayer of phospholipid molecules. The Langmuir technique enables chemists to create a reasonable model of this: a uniform phospholipid monolayer. The phospholipids do not mix with water because they are amphiphilic: their hydrophobic tails stick up out of the water while their hydrophilic heads sit side by side in the surface of the water (see box 1). Once the monolayer is stable, the polymer is injected through the bottom of the shallow trough into the buffer solution beneath it. Troughs are equipped with a surface pressure sensor, so any changes in surface pressure can be measured, and these give us indications of what is happening at the molecular level. For example, an increase in surface pressure indicates that the polymer is inserting itself into the monolayer, forcing the lipid molecules apart. If we did see such an increase, it might lead us to conclude that, were the polymer interacting with a real cell, it would interfere with the cell membrane and could cause irreparable damage. Of course further studies would be needed, but the technique gives us an insightful starting point.
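To give a flavour of how such a measurement might be read, here is a purely hypothetical sketch of the logic: the readings, injection point and threshold are all invented, and a real Langmuir trough would of course come with its own acquisition software.

```python
# Hypothetical sketch of how Langmuir trough surface-pressure data might be
# interpreted. The readings, injection point and threshold are invented;
# a real trough's own software would supply the actual data.

def mean(values):
    return sum(values) / len(values)

def pressure_rise(readings, injection_index, threshold_mn_per_m=1.0):
    """Compare mean surface pressure before and after the polymer injection
    and flag whether the rise suggests insertion into the lipid monolayer."""
    before = mean(readings[:injection_index])
    after = mean(readings[injection_index:])
    change = after - before
    return change, change > threshold_mn_per_m

# Surface pressure in mN/m at regular intervals; polymer injected after the
# fifth reading.
readings = [30.0, 30.1, 30.0, 29.9, 30.0, 31.2, 32.5, 33.1, 33.4, 33.5]
change, inserting = pressure_rise(readings, injection_index=5)
print(f"Pressure change: {change:+.1f} mN/m; insertion suspected: {inserting}")
```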
Dendrimers: the next generation of polymer therapeutics
Box 2 shows PEG and PEI, two polymers used in drug conjugates at the moment; these are both certified as non-toxic and approved for use in humans. Of course the search for new drugs is ongoing, and an impressive new candidate for use in polymer therapeutics is a class of compound known as dendrimers. These novel polymers are monodisperse, spherical polymers grown outwards from a central core. Their hollow core has the potential to act as a protective storage area for drug molecules. Their monodispersity makes them an attractive candidate as a delivery vector too, as it means their action in the body can be more easily and accurately predicted. Their functionalised arms, however, mean that they are likely to have an interesting and novel interaction with cell membranes. The Langmuir technique will be one way in which chemists try to discover the nature of this interaction. If dendrimers turn out not to be toxic, we could have the makings of an excellent drug delivery vector.