About this Author
College chemistry, 1983
The 2002 Model
After 10 years of blogging. . .
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek, email him directly: firstname.lastname@example.org
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline
June 18, 2013
I've been meaning for some time to put up some new photos on the site, what with the original one being over ten years old by now. So here we are - a progress through time. The beard remains a constant, and I still have the T. S. Eliot paperback that I'm reading in the 1983 shot. That's an old flash column next to me, drying out because I was too lazy to clean it out, and some TLC plates on the bench. I was doing carbohydrate chemistry, forming nitrones and doing cycloadditions, and I'm not at all sure that I've ever done a nitrone reaction since!
+ TrackBacks (0) | Category: Blog Housekeeping
Natural products come up around here fairly often, as sources of chemical diversity and inspiration. Here's a paper that combines them with another topic (epigenetics) that's been popular around here as well, even if there's some disagreement about what the word means.
A group of Japanese researchers was looking at the natural products derived from a fungus (Chaetomium indicum). Recent work has suggested that fungi have a lot more genes/enzymes available to make such things than are commonly expressed, so in this work the team fed the fungus an HDAC inhibitor to kick its expression profile around a bit. The paper has a few references to other examples of this technique, and it worked again here - they got a significantly larger number of polyketide products out of the fermentation, including several that had never been described before.
There have been many attempts to rejigger the synthetic machinery in natural-product-producing organisms, ranging from changing their diet of starting materials and adding environmental stresses to their cultures, all the way to manipulating their genomic sequences directly. This method has the advantage of being easier than most, and the number of potential gene-expression-changing compounds is large. Histone deacetylase inhibitors alone show wide ranges of selectivity across members of the class, and then you have the reverse mechanism (histone acetyltransferase), methyltransferase and demethylase inhibitors, and many more. These should be sufficient to produce weirdo compounds a-plenty.
+ TrackBacks (0) | Category: Chemical Biology | Natural Products
Bernard Munos (ex-Lilly, now consulting) is out with a paper reviewing the approved drugs from 2000 to 2012. What's the current state of the industry? Is the upturn in drug approvals over the last two years real, or an artifact? And is it enough to keep things going?
Over that span, drug approvals averaged 27 per year. Half of all the new drugs were in three therapeutic areas: cancer, infectious disease, and CNS. And as far as mechanisms go, there were about 190 different ones, by Munos' count. The most crowded category was (as might have been guessed) the tyrosine kinase inhibitors, with 17 drugs, but 85% of the mechanisms were represented by only one or two drugs, which is a long tail indeed.
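That long-tail figure is easy to reproduce if you have the data in hand. Here's a minimal sketch, assuming a list of (drug, mechanism) records - the entries below are real drugs, but they're just stand-ins for Munos' actual dataset:

```python
from collections import Counter

# Hypothetical (drug, mechanism) records; stand-ins for the real dataset,
# which would have one row per NME approved 2000-2012.
approvals = [
    ("imatinib", "tyrosine kinase inhibitor"),
    ("gefitinib", "tyrosine kinase inhibitor"),
    ("linezolid", "50S ribosomal subunit inhibitor"),
    ("aliskiren", "renin inhibitor"),
    # ...and so on
]

drugs_per_mechanism = Counter(mech for _drug, mech in approvals)
rare = sum(1 for n in drugs_per_mechanism.values() if n <= 2)
share = 100.0 * rare / len(drugs_per_mechanism)
print(f"{rare} of {len(drugs_per_mechanism)} mechanisms "
      f"have two or fewer drugs ({share:.0f}%)")
```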
Half those mechanisms were novel - that is, they were not represented by drugs approved before 2000. Coming up behind these first-in-class mechanisms were 29 follow-on drugs during this period, with an average gap of just under three years between the first and second drugs. What that tells you is that the follower programs were started either at about the same time as the first-in-class compounds (and had a slightly longer path through development), or at the first opportunity once the other program or mechanism became known. This means that they were started on very nearly the same risk basis as the original programs: a three-year gap is not enough to validate much about a new mechanism, other than the fact that another organization thinks it's worth working on, too. (Don't laugh at that one - there are research departments that seem to live only for this validation, and regard their own first-in-class ideas with fear and suspicion.)
Overall, though, Munos says that the fast-follower approach doesn't seem to be very effective, or at least not any more, given that few targets seem to be yielding more than one or two drugs. And as just mentioned, the narrow gap between first and second drugs also suggests that the risk-lowering effect of this strategy isn't very impressive, either.
Here's another interesting/worrisome point:
The long tail (of the mode-of-action curve). . . suggests that pharmaceutical innovation is a by-product of exploration, and not the result of pursuing a limited set of mechanisms, reflecting, for instance, a company’s marketing priorities. Put differently, there does not seem to be enough mechanisms able to yield multiple drugs, to support an industry. . .The last couple of years have seen an encouraging rise in new drug approvals, including many based on novel modes of action. However that surge has benefited companies unequally, with the top 12 pharmaceutical companies only garnering 25 out of 68 NMEs (37%). This is not enough to secure their future.
Looking at what many (most?) of the big companies are going through right now, it's hard to argue with that point of view. The word "secure" does not appear within any short character length of "future" when you look through the prospects for Lilly, AstraZeneca, and others.
Note also that part about how what a drug R&D operation finds isn't necessarily what it was looking for. That doesn't mesh well with some models of management:
The drug hunter’s freedom to roam, and find innovative translational opportunities wherever they may lie is an essential part of success in drug research. This may help explain the disappointing performance of the programmatic approaches to drug R&D, that have swept much of the industry in the last 15 years. It has important managerial implications because, if innovation cannot be ordained, pharmaceutical companies need an adaptive – not directive – business model.
But if innovation cannot be ordained, why does a company need lots of people in high positions to ordain it, each with his or her own weekly meeting and online presentations database for all the PowerPoint slides? It's a head-scratcher of a problem, isn't it?
+ TrackBacks (0) | Category: Drug Development | Drug Industry History
June 17, 2013
The Supreme Court has another ruling that affects the drug industry: FTC v. Actavis took up the question of "pay to delay", the practice of paying generic companies to go away and not challenge a branded drug. Actavis was in the process of bringing a version of Solvay's AndroGel to market, claiming that the Solvay patent was invalid. They won that case, and the FDA approved their generic version, but Solvay turned around and paid them (and Paddock, another generic firm) to not bring any such drug to the market.
The Federal Trade Commission (FTC) filed suit, alleging that respondents violated §5 of the Federal Trade Commission Act by unlawfully agreeing to abandon their patent challenges, to refrain from launching their low-cost generic drugs, and to share in Solvay’s monopoly profits. The District Court dismissed the complaint. The Eleventh Circuit concluded that as long as the anticompetitive effects of a settlement fall within the scope of the patent’s exclusionary potential, the settlement is immune from antitrust attack. Noting that the FTC had not alleged that the challenged agreements excluded competition to a greater extent than would the patent, if valid, it affirmed the complaint’s dismissal. It further recognized that if parties to this sort of case do not settle, a court might declare a patent invalid. But since public policy favors the settlement of disputes, it held that courts could not require parties to continue to litigate in order to avoid antitrust liability.
And now the Supreme Court reverses the Eleventh Circuit. The FTC, they hold, should have been given a chance to make its antitrust case. The Court makes a point of declining to hold such agreements "presumptively unlawful", but gives a guide to breaking them down. There are both patent validity questions and anticompetitive questions involved, they point out, and these are separate issues (and because of that, they might not take forever to litigate, as the Eleventh Circuit decision worried about). Besides, as the justices note, a sudden large payment in such a case can be a reasonable indication of the patent holder's own doubts about the patent's validity (and its chances of holding up to a determined challenge). The Hatch-Waxman Act has a generally pro-competitive bent to it, and that should operate in this situation as well.
I think this is the decision that most people expected (it's certainly the one I did). Pay-to-delay has always had an antitrust-violation smell to it. The Supreme Court has now gone on record as saying that this scent may well be no illusion, and at the very least, the FTC should be able to make a case if it can. I suspect that we're going to see fewer of these deals now - perhaps none at all - because I doubt many of them would hold up.
+ TrackBacks (0) | Category: Patents and IP | Regulatory Affairs
That's my take-away from this paper, which takes a deep look at a reconstituted beta-adrenergic receptor via fluorine NMR. There are at least four distinct states (two inactive ones, the active one, and an intermediate), and the relationships between them are different with every type of ligand that comes in. Even the ones that look similar turn out to have very different thermodynamics on their way to the active state. If you're into receptor signaling, you'll want to read this one closely - and if you're not, or not up for it, just take away the idea that the landscape is not a simple one. As you'd probably already guessed.
Note: this is a multi-institution list of authors, but it did catch my eye that David Shaw of Wall Street's D. E. Shaw does make an appearance. Good to see him keeping his hand in!
+ TrackBacks (0) | Category: Analytical Chemistry | Biological News | In Silico
Compound aggregation is a well-known problem in biochemical assays (although if you go back a few years, that certainly wasn't the case). Some small molecules will start to bunch up under some assay conditions, and instead of your target protein getting inhibited by a single molecule of your test compound, the protein could look as if it's been inhibited by virtue of being dragged into a huge sticky clump of Test Compound Aggregate.
A group at Boehringer Ingelheim has a paper out in J. Med. Chem. suggesting a simple NMR readout to see if a given compound is showing aggregation behavior. It looks useful, but there's one thing I would add to it. The authors mention that they used a simple sodium phosphate buffer for their experiments, and that similar trends were observed in others (for a "limited set of compounds"). But I've heard Tony Giannetti of Genentech speak on this subject before (with reference to his specialty, surface plasmon resonance assays), and he's been pretty adamant about how situation-dependent aggregation can be.
The Shoichet lab's "Aggregator Advisor" page agrees. My worry is that some people might read this new paper and be tempted to clean out their screening sets up front - but you could throw away some useful compounds that way, because aggregation, annoyingly, appears to be a case-by-case thing. Probably the best ways to guard against it are (1) see if your assay can be run with detergent in it to start with, and be prepared to vary the amount, and (2) take your screening hits of interest and check them out individually before you decide that you're on to something. This new NMR assay would be a good way to do that, using the buffer that your screen was run in.
Another note that comes up in all discussions of aggregators is that while many of them are condition-specific, others have a wider range. Many "frequent hitter" compounds turn out to aggregate under a variety of conditions. In that case (because you've got empirical data from your own assays), it's really worth going back and flagging those things. It would seem worthwhile to go through any screening collection and pitch out the individual compounds that show up time and time again, since these are surely less likely to lead to anything useful. Some of these will, on closer inspection, turn out to be promiscuous aggregators, but there are other mechanisms for nastiness as well. In extreme cases, whole structural motifs should be given the fishy eye.
+ TrackBacks (0) | Category: Drug Assays
June 14, 2013
I've heard from more than one source that Roger Perlmutter has been shaking things up this week at Merck. Since he only took over R&D in March, that's a pretty short lag time - if these reports are accurate, he clearly has some strong opinions and is ready to act on them. From what I've been hearing, bench-level people aren't being affected. It's all in the managerial levels. Anyone with more knowledge and a willingness to share it is welcome to do so in the comments. . .
+ TrackBacks (0) | Category: Current Events
Via Stuart Cantrill on Twitter, I see that UK Prime Minister David Cameron is prepared to announce a prize for anyone who can "identify and solve the biggest problem of our time". He's leaving that open, and his examples are apparently ". . .the next penicillin, aeroplane or world wide web".
I like the idea of prizes for research and invention. The thing is, the person who invents the next airplane or World Wide Web will probably do pretty well off it through the normal mechanisms. And it's worth thinking about the very, very different pathways these three inventions took, both in their discovery and their development - and while thinking about that, keep in mind the difference between those two stages.
The Wrights' first powered airplane, a huge step in human technology, was good for carrying one person (lying prone) for a few hundred yards in a good wind. Tim Berners-Lee's first Web page, another huge step, was a brief bit of code on one server at CERN, and mostly told people about itself. Penicillin, in its early days, was famously so rare that the urine of the earliest patients was collected and extracted in order not to waste any of the excreted drug. And even that was a long way from Fleming's keen-eyed discovery of the mold's antibacterial activity. A more vivid example than penicillin of the need for huge amounts of development from an early discovery is hard to find.
And how does one assign credit to the winner? Many (most) of these discoveries take a lot of people to realize them - certainly, by the time it's clear that they're great discoveries. Alexander Fleming (very properly) gets a lot of credit for the initial discovery of penicillin, but if the world had depended on him for its supply, it would have been very much out of luck. He had a very hard time getting anything going for nearly ten years after the initial discovery, and not for lack of trying. The phrase "Without Fleming, no Chain; without Chain, no Florey; without Florey, no Heatley; without Heatley, no penicillin" properly assigns credit to a lot of scientists that most people have never heard of.
Those are all points worth thinking about, if you're thinking about Cameron's prize, or if you're David Cameron. But that's not all. Here's the real kicker: he's offering one million pounds for it ($1.56 million as of this morning). This is delusional. The number of great discoveries that can be achieved for that sort of money is, I hate to say, rather small these days. A theoretical result in math or physics might certainly be accomplished in that range, but reducing it to practice is something else entirely. I can speak to the "next penicillin" part of the example, and I can say (without fear of contradiction from anyone who knows the tiniest bit about the subject) that a million pounds could not, under any circumstances, tell you if you had the next penicillin. That's off by a factor of a hundred, if you just want to take something as far as a solid start.
There's another problem with this amount: in general, anything that's worth that much is actually worth a lot more; there's no such thing as a great, world-altering discovery that's worth only a million pounds. I fear that this will be an ornament around the neck of whoever wins it, and little more. If Cameron's committee wants to really offer a prize in line with the worth of such a discovery, they should crank things up to a few hundred million pounds - at least - and see what happens. As it stands, the current idea is like me offering a twenty-dollar bill to anyone who brings me a bar of gold.
+ TrackBacks (0) | Category: Current Events | Drug Industry History | Infectious Diseases | Who Discovers and Why
The brutal drumbeat of Alzheimer's clinical failure continues at Eli Lilly. After the Phase III failure of their gamma-secretase inhibitor semagacestat, and a delusional attempt to pretend that the anti-amyloid antibody solanezumab succeeded, now comes word that the company has halted studies of a beta-secretase inhibitor.
This one wasn't for efficacy, but for tox. The company says that LY2886721 led to abnormalities in liver function, which is the sort of thing that can happen to anyone in Phase II. There is that thioamidine thing in it, but overall, it's not a bad-looking compound, particularly by the standards of beta-secretase inhibitors. But what does that avail one? We'll never find out what this one would have done in a real Phase III trial, although (unfortunately) I know how I'd lay the odds, considering what we know about Alzheimer's drugs in the clinic. Beta-secretase inhibitors are an even higher-stakes bet than usual in this field, because mechanistically they have pretty strong support when it comes to inhibiting the buildup of amyloid protein, but they also have clear mechanistic liabilities: the enzyme seems to be important in the formation of myelin sheaths, which is not the sort of thing you'd want to touch in a patient population that's already neurologically impaired. Which effect wins out in humans? Does a BACE inhibitor really lower amyloid in the clinic? And does lowering amyloid in this way really affect the progression of Alzheimer's disease? Extremely good questions, all of those, and the only way to answer them is to round up a plausible drug candidate (not so easy for this target), half a billion dollars (for starters), and try it out.
This failure makes Lilly perhaps the first company to achieve a dread milestone, the Amyloid Trifecta. They have now wiped out on beta-secretase, on gamma-secretase, and on antibody therapy. And you know, I have to salute them for it. They've been making a determined effort against a terrible disease, trying all the most well-founded means of attack, and they're getting hammered into the ground like a tent peg for it by Alzheimer's. At the rate things are going, Lilly is going to end up in a terrible position, and a lot of it has to do with battering themselves against this disease. Remember that the next time someone tells you about how drug companies are just interested in ripping off each other's baldness cures or something.
+ TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials
June 13, 2013
Just a little while ago, the Supreme Court issued a unanimous decision (rare these days) in the Myriad Genetics case. I summarized the state of play up until the most recent arguments here, and if you're just getting up to speed on this issue, I'd read that post first. There are a lot of things this case is not about, and there are a lot of headlines that are going to mess things up. I would not be surprised to see "Myriad Wins" and "Myriad Loses" coming up at the same time in a news search.
Here's the actual decision (PDF), and here's the key statement:
A naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated, but cDNA is patent eligible because it is not naturally occurring.
The earlier appeals court decision was broader, and found that isolated stretches of otherwise natural DNA were, in fact, patent-eligible, because they are not found as such (unwound, de-histoned, cleaved at both ends) in nature. But this ruling dials that back a bit. A cDNA, stripped of introns, etc., is indeed a work of human ingenuity, and is patent-eligible (as indeed, it had been considered to be before this decision). Here's more:
It is important to note what is not implicated by this decision. First, there are no method claims before this Court. Had Myriad created an innovative method of manipulating genes while searching for the BRCA1 and BRCA2 genes, it could possibly have sought a method patent. But the processes used by Myriad to isolate DNA were well understood by geneticists at the time of Myriad’s patents “were well understood, widely used, and fairly uniform insofar as any scientist engaged in the search for a gene would likely have utilized a similar approach,” 702 F. Supp. 2d, at 202–203, and are not at issue in this case.
Similarly, this case does not involve patents on new applications of knowledge about the BRCA1 and BRCA2 genes. Judge Bryson aptly noted that, “[a]s the first party with knowledge of the [BRCA1 and BRCA2] sequences, Myriad was in an excellent position to claim applications of that knowledge. Many of its unchallenged claims are limited to such applications.” 689 F. 3d, at 1349. Nor do we consider the patentability of DNA in which the order of the naturally occurring nucleotides has been altered. Scientific alteration of the genetic code presents a different inquiry, and we express no opinion about the application of §101 to such endeavors. We merely hold that genes and the information they encode are not patent eligible under §101 simply because they have been isolated from the surrounding genetic material.
Unfortunately, many of the news blurbs on this issue are smudging these questions around. I don't actually expect this ruling to have much effect, to be honest, except as a way to help resolve the question of whether stretches of raw DNA are patentable. The glory days of trying to patent such things are long gone, in any case. And since there are many more useful forms which are patentable, any headlines about "No patents for DNA!" are misleading.
+ TrackBacks (0) | Category: Patents and IP
Single-molecule techniques are really the way to go if you're trying to understand many types of biomolecules. But they're difficult to realize in practice (a complaint that should be kept in context, given that many of these experiments would have sounded like science fiction not all that long ago). Here's an example of just that sort of thing: watching DNA polymerase actually, well, polymerizing DNA, one base at a time.
The authors, a mixed chemistry/physics team at UC Irvine, managed to attach the business end (the Klenow fragment) of DNA Polymerase I to a carbon nanotube (a mutated Cys residue and a maleimide on the nanotube did the trick). This gives you the chance to use the carbon nanotube as a field-effect transistor, with changes in the conformation of the attached protein changing the observed current. It's stuff like this, I should add, that brings home to me the fact that it really is 2013, the relative scarcity of flying cars notwithstanding.
The authors had previously used this method to study attached lysozyme molecules (PDF, free author reprint access). That second link is a good example of the sort of careful brush-clearing work that has to be done with a new system like this: how much does altering that single amino acid change the structure and function of the enzyme you're studying? How do you pick which one to mutate? Does being up against the side of a carbon nanotube change things, and how much? It's potentially a real advantage that this technique doesn't require a big fluorescent label stuck to anything, but you have to make sure that attaching your test molecule to a carbon nanotube isn't even worse.
It turns out, reasonably enough, that picking the site of attachment is very important. You want something that'll respond conformationally to the actions of the enzyme, moving charged residues around close to the nanotube, but (at the same time) it can't be so crucial and wide-ranging that the activity of the system gets killed off by having these things so close, either. In the DNA polymerase study, the enzyme was about 33% less active than wild type.
And the authors do see current variations that correlate with what should be opening and closing of the enzyme as it adds nucleotides to the growing chain. Comparing the length of the generated DNA with the FET current, it appears that the enzyme incorporates a new base at least 99.8% of the time it tries to, and the mean time for this to happen is about 0.3 milliseconds. Interestingly, A-T pair formation takes a consistently longer time than C-G does, with the rate-limiting step occurring during the open conformation of the enzyme in each case.
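If you're curious how dwell times get pulled out of data like this, here's a toy sketch (mine, not the authors'): threshold a noisy two-level current trace into enzyme-open and enzyme-closed states and measure how long each visit lasts. The trace, sampling rate, and threshold below are all invented for the example.

```python
import numpy as np

def dwell_times(current, threshold, sample_rate_hz):
    """Split a current trace into above/below-threshold runs and return
    each run's duration in seconds: (low-state dwells, high-state dwells)."""
    state = current > threshold
    # Indices where the trace crosses the threshold:
    edges = np.flatnonzero(np.diff(state.astype(np.int8))) + 1
    bounds = np.concatenate(([0], edges, [state.size]))
    runs = np.diff(bounds) / sample_rate_hz   # run lengths in seconds
    start_states = state[bounds[:-1]]         # which state each run is in
    return runs[~start_states], runs[start_states]

# Synthetic two-level trace: 100 kHz sampling, ~10 ms per state, plus noise.
rng = np.random.default_rng(0)
t = np.arange(200_000)
square = (np.sin(2 * np.pi * t / 2000) > 0).astype(float)
trace = square + rng.normal(0.0, 0.1, t.size)

low, high = dwell_times(trace, 0.5, 100_000)
print(f"mean dwell: low {low.mean()*1e3:.2f} ms, high {high.mean()*1e3:.2f} ms")
```

The real analysis is harder than this, of course - the two current levels drift and overlap, so published single-molecule work tends to use hidden Markov models rather than a fixed threshold - but the basic idea of converting a current trace into state-residence times is the same.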
I look forward to more applications of this idea. There's a lot about enzymes that we don't know, and these sorts of experiments are the only way we're going to find out. At present, this technique looks to be a lot of work, but you can see it firming up before your eyes. It would be quite interesting to pick an enzyme that has several classes of inhibitor and watch what happens on this scale.
It's too bad that Arthur Kornberg, the discoverer of DNA Pol I, didn't quite live to see such an interrogation of the enzyme; he would have enjoyed it very much, I think. As an aside, that last link, with its quotes from the reviewers of the original manuscript, will cheer up anyone who's recently had what they thought was a good paper rejected by some journal. Kornberg's two papers only barely made it into JBC, but one year after a referee said "It is very doubtful that the authors are entitled to speak of the enzymatic synthesis of DNA", Kornberg was awarded the Nobel for just that.
+ TrackBacks (0) | Category: Analytical Chemistry | Biological News | The Scientific Literature
June 12, 2013
Here's a neat bit of reaction optimization from the Aubé lab at Kansas. (Update: left the link out before - sorry!) They're trying to make one of their workhorse reactions, the intramolecular Schmidt, a bit less nasty by cutting down on the amount of acid catalyst. The problem with that is product inhibition: the amide that's formed in the reaction tends to vacuum up any Lewis acid around, so you've typically had to use that reagent in excess, which is not a lot of fun on scale.
By varying a number of conditions, they've found a new catalyst/solvent system that's quite a bit friendlier. I keep meaning to try some of these reactions out (they make some interesting molecular frameworks), and maybe this is my entry into them. But the general problem here is one that every working organic chemist has faced: reactions that, for whatever reason, stop partway through. In this situation, there's at least a reasonable hypothesis for why things grind to a halt, and there's always been a less-than-elegant way around it (dump in more Lewis acid).
I'm sure, though, that everyone out there at the bench has had reactions that just. . .stop, for reasons unknown, and can't be pushed forward by addition of more anything. I've always wondered what's going on in those situations (probably a lot of things, from case to case), and they're always a reminder of just how little we sometimes really understand about what's going on inside our reaction flasks. Aggregates or other supramolecular complexes? Solubility problems? Adsorption onto heterogeneous reactants? Getting a handle on these things isn't easy, and most people don't bother doing it, unless they're full-out process chemists in industry.
+ TrackBacks (0) | Category: Chemical News | Life in the Drug Labs
ChemBark has an interesting question here: who's the most respected and influential chemist, among chemists? He was taking nominations on Twitter, and has settled on Roald Hoffmann as his choice. Other strong contenders included Nocera, Corey, Whitesides, Sharpless, Kroto, Grubbs, Gray, Herschbach, Zare, and Stoddart. Anyone over here have names to add to the list? Note again that we're talking influence and fame inside the field, because if you go to "among the general public", you pretty much cut everyone out right there, unfortunately. . .
+ TrackBacks (0) | Category: Chemical News
June 11, 2013
Ionic liquids (molten salts at relatively low temperatures) have been a big feature of the chemical literature for the last ten or fifteen years - enough of a feature to have attracted a few disparaging comments here, from me and from readers. There's a good article out now that talks about the early days of the field and how it grew, and it has some food for thought in it.
The initial reports in the field didn't get much attention (as is often the case). What seems to have made things take off was the possibility of replacing organic solvents with reusable, non-volatile, and (relatively) non-toxic alternatives. "Green chemistry" was (and to an extent still is) a magnet for funding, and it was the combination of this with ionic liquid (IL) work that made the field. But not all of this was helpful:
The link with green chemistry during the development of the IL field, propelled both fields forward, but at times the link was detrimental to both fields when overgeneralizations eroded confidence. ILs were originally considered as green since many of these liquid salts possess a negligible vapor pressure and might replace the use of volatile organic solvents known to result in airborne chemical contamination. The reported water stability and non-volatility led to the misconception that these salts were inherently safe and environmentally friendly. This was exacerbated by the many unsubstantiated claims that ILs were ‘green’ in introductions meant to provide the motivation for the study, even if the study itself had nothing to do with green chemistry. While it is true that the replacement of a volatile organic compound (VOC) might be preferred, proper knowledge of the chemistry of the ions must also be taken into account before classifying anything as green. Nonetheless, the statement “Ionic Liquids are green” was widely published (and can still be found in papers published today). Given the number and nature of the possible ions comprising ILs, these statements are similar to “Water is green, therefore all solvents are green.”
There were many misunderstandings at the chemical level as well:
However, just as the myriad of molecular solvents (or any compounds) can have dramatic differences in chemical, physical, and biological properties based on their chemical identity, so too can ILs. With the potential for 10^18 ion combinations, a single crystal structure of one compound is not a good representation of the chemistry of the entire class of salts which melt below 100 °C and would be analogous to considering carbon tetrachloride as a model system for all known molecular solvents.
The realization that hexafluorophosphate counterions can indeed generate HF under the right conditions helped bring a dose of reality back to the field, although (as the authors point out), not without a clueless backlash that decided, for a while, that all ionic liquids were therefore intrinsically toxic and corrosive. The impression one gets is that the field has settled down, and that its practitioners are more closely limited to people who know what they are talking about, rather than having quite so many who are doing it because it's hot and publishable. And that's a good thing.
+ TrackBacks (0) | Category: Chemical News | The Scientific Literature
The accusations of data fabrication at GlaxoSmithKline's China research site are quite real. That's what we get from the latest developments in the case, as reported by BioCentury, Pharmalot, and the news section at Nature Medicine. Jingwu Zang, lead author on the disputed paper and former head of the Shanghai research site, has been dismissed from the company. Other employees are on administrative leave while an investigation proceeds, and GSK has said it has begun the process of retracting the paper itself.
As for what's wrong with the paper in question, BioCentury Extra has this:
GSK said data in a paper published in January 2010 in Nature Medicine on the role of interleukin-7 (IL-7) in autoimmune disease characterized data as the results of experiments conducted with blood cells of multiple sclerosis (MS) patients "when, in fact, the data reported were either the results of experiments conducted at R&D China with normal (healthy donor) samples or cannot be documented at all, suggesting that they well may have been fabricated."
Pharmalot and others also report that GSK is asking all the authors of the paper to sign documents to agree that it be retracted, which is standard procedure at the Nature Publishing Group. If there's disagreement among them, the situation gets trickier, but we'll see what happens.
The biggest questions are unanswered, though, and we're not likely to hear about them except in rumors and leaks. How, for one thing, did this happen in the first place? On whose initiative were results faked? Who was supposed to check up on these results, and was there anything that could have been done to catch this problem earlier? Even more worrying - and you can bet that plenty of people inside GSK are thinking this, too - how many more things have been faked as well? You'd hope that this was an isolated incident, but if someone is willing to whip up a batch of lies like this, they might well be willing to do much more besides.
The involvement of the head of the entire operation (Jingwu Zang) is particularly troubling. Sometimes, in such cases, it turns out that the person at the top just had their name on the paper, but didn't really participate much or even know what was going on. But he's the only person so far in this mess who's been outright fired, which suggests that something larger has happened. We're not going to hear much about it, but you can bet there are some rather worried and upset people digging through this inside GlaxoSmithKline. There had better be.
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
June 10, 2013
The topic of making hit compounds, leads, and drug candidates that are less flat/aromatic has come up several times around here, and constantly around the industry. A reader sent along the following question: supposing that you wanted to obtain a decent collection of molecules with a greater-than-normal number of nonaromatic carbons and chiral centers, where would you find them?
Are there some suppliers that have done a better job than others of rising to the demand for this sort of thing? If anyone has nominations for good sources, or for places that are at least showing signs of moving in that direction, they'd be welcome. My guess is that fragment-sized molecules would be a good place to start, since they're (presumably) more synthetically accessible, and have advantages in the amount of chemical space that can be covered per number of compounds, but all comers will be considered. . .
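In the meantime, here's a rough sketch of how one might triage a vendor catalog for such compounds with RDKit. The file name and cutoffs are arbitrary stand-ins for illustration, not recommendations:

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

keepers = []
for mol in Chem.SDMolSupplier("vendor_catalog.sdf"):   # hypothetical file
    if mol is None:                                    # skip unparseable entries
        continue
    fsp3 = rdMolDescriptors.CalcFractionCSP3(mol)      # fraction of sp3 carbons
    stereocenters = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    # Fragment-sized, reasonably saturated, at least one stereocenter:
    if mol.GetNumHeavyAtoms() <= 20 and fsp3 >= 0.5 and stereocenters:
        keepers.append(Chem.MolToSmiles(mol))

print(f"{len(keepers)} fragment-sized, sp3-rich, chiral candidates")
```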
+ TrackBacks (0) | Category: Chemical News
Nature Medicine has an update on the deuterated drug landscape. There are several compounds in the clinic, and the time to the first marketed deuterium-containing drug is surely counting down.
But, as mentioned at the end of that piece, another countdown that also must be ticking away is the one to the first lawsuit. There are several places where one could be fought out. The deuterated-drug landscape was the subject of a vigorous early land rush, and there are surely overlapping claims out there which will have to be sorted out if (when) the money starts to flow from the idea. And there's the whole problem of obviousness, a key patent-killer. The tricky thing is, standards of what is obvious to one skilled in the art change over time. They have to change; the art changes. (I'll risk some more gritted teeth among the readership by breaking into Latin again: Tempora mutantur, nos et mutamur in illis.)
We've already seen this with respect to single enantiomers - it's now considered obvious to resolve a racemic mixture, and to expect that the two isomers will have different activities as pharmaceuticals. At what point will it be considered obvious that deuteration can improve the pharmacokinetics? If that does ever happen, it'll take longer, because deuteration is not as simple a process as resolution of a racemate. It can be difficult (and, well, non-obvious) to figure out where to put the deuteriums for maximum effect, and how many need to be added. Adding them is not always so easy, either, which brings up questions of enablement and reduction to practice. You need to teach toward the compounds you want to claim, and for deuteration, that's going to mean getting pretty specific.
There's another consideration that I hadn't been aware of until this weekend. I had the chance to talk with a patent attorney at a social gathering (not everyone's idea of a big Saturday night, admittedly, but I enjoyed the whole affair). He was explaining to me a consequence of the Supreme Court's recent ruling on obviousness, the 2007 KSR v. Teleflex decision. Apparently, one of the major effects of that ruling was the idea that if there are a limited number of known options for an inventor to choose from, that can take the whole thing into the realm of the obvious. The actual language is that when ". . .there is a design need or market pressure to solve a problem and there are a finite number of identified, predictable solutions, a person of ordinary skill has good reason to pursue the known options within his or her technical grasp. . .the fact that a combination was obvious to try might show that it was obvious under § 103". You can see the PTO itself trying to come to grips with KSR here, and it seems to be very heavily cited indeed by examiners (and in subsequent court cases).
Naturally, as with legal matters, the big question becomes exactly what a limited number of options might mean. How many, exactly, is that? In the case of a racemate, you have two (only two, always two), and it's certainly reasonable to expect them to be different in vivo. So that would come under the KSR principle, I'd say, and it's not just me. But what if there are a limited number of places that a deuterium can be added to a molecule? At what point does deuterating them become, well, just one of those things that a person skilled in the art would know to try?
Expect a court case on this eventually, when some serious money starts to be made in the area. This is going to be fought out case by case, and it's going to take quite a while.
+ TrackBacks (0) | Category: Patents and IP | Pharmacokinetics
June 7, 2013
Readers may remember the sudden demise of science-fraud.org, under threats of legal action. Its author, Paul Brookes, had a steady stream of material pointing out what very much seemed to be altered and duplicated figures in many scientific publications.
Now comes word that the Brazilian researcher (Rui Curi) whose legal threats led to that shutdown has corrected yet another one of his publications. That Retraction Watch link has the details, but I wanted to highlight the corrections involved:
After the publication of this manuscript we observed mistakes in Figures 3A, 4A, and 6A. The representative images related to pAkt (Figure 3A), mTOR total (Figure 4A), and MuRF-1 total (Figure 6A) have been revised. Please note the original raw blots are now provided with the revised Figures as part of this Correction.
In Figure 3A, pAkt panel, the C and CS bands had been duplicated.
In Figure 4A, the bands were re-arranged compared to the original blot.
In Figure 6A, the band for group D was incorrect.
The remaining Figures, results and conclusions are the same as originally reported in the article. The authors apologize for these errors and refer readers to the corrected Figures 3A, 4A, and 6A provided in this Correction.
So I'm certainly glad that Prof. Curi went after a web site that looks for rearranged blots and altered gels. We wouldn't want any of that around. Would we, now.
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
Here's a problem with screening collections that I have to admit I wasn't aware of: generation of hydrogen peroxide. This paper (free access) gives an excellent overview of what's going on. Turns out that some compounds can undergo redox-cycling in the presence of the common buffer additive DTT (dithiothreitol - note - fixed brain spasm on earlier name), spitting out, in the end, a constant trickle of peroxide.
Now, for many assays, this might not mean much one way or another. But enzymes with a crucial cysteine residue are another matter. Those can get oxidized, which is irritating in these cases, because DTT is added to such assays just to keep that sort of thing from happening. That link above describes a useful horseradish peroxidase/phenol red assay to detect hydrogen peroxide generation, and its use to profile the NIH's Small Molecule Repository compound collections.
Fortunately, only a limited number of compounds have the ability to hose up your assays in this manner. Of the roughly 196,000 compounds screened, only 37 were true peroxide generators. Quinones are serial offenders, as any chemist might expect, but if you let your screening collection fill up with quinones, you have only yourself to blame. There are less obvious candidates, though: several arylsulfonamides also showed this behavior, and while those aren't everyone's favorite compounds, I'd like to see the large screening set that doesn't have some in there somewhere. It's worth noting, though, that many of the sulfonamides that were identified are also quinone-ish.
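For what it's worth, flagging the offenders from that kind of screen is simple arithmetic. Here's a minimal sketch, with invented well values and an assumed three-sigma cutoff over the DTT-only control wells:

```python
import statistics

def flag_peroxide_generators(absorbance, controls, n_sigma=3.0):
    """absorbance: {compound_id: phenol-red A610 with DTT present};
    controls: readings from DTT-only wells (no test compound)."""
    mu = statistics.mean(controls)
    sigma = statistics.stdev(controls)
    cutoff = mu + n_sigma * sigma
    return {cid for cid, a in absorbance.items() if a > cutoff}

controls = [0.101, 0.098, 0.104, 0.099, 0.102, 0.100]   # invented values
plate = {"cmpd-001": 0.103, "cmpd-002": 0.350, "cmpd-003": 0.097}
print(flag_peroxide_generators(plate, controls))        # {'cmpd-002'}
```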
So I think the take-home advice here is to be aware if your target is sensitive to this sort of thing. Cysteine proteases are obvious candidates, but Trp can be oxidized, too, and a lot of proteins have crucial disulfides that might get unraveled. Once you've flagged your protein as a concern, be sure to run the hits you get back through this peroxide assay to make sure that you're not being led on. Trying to eliminate compounds by structural class up front is another approach, but the compounds that are first on the list are compounds that you should have trashcanned already.
+ TrackBacks (0) | Category: Drug Assays
The literature access service DeepDyve has made an intriguing announcement of a new service they're offering for non-subscribers of scientific journals. For free, you can have access to the full text. . .for five minutes.
Here's more from the Information Culture blog at Scientific American. Obviously, five minutes is not enough to actually read a journal article, but it probably is enough to decide if you really want to pay to see the thing for real. (And I might note, for chemists and biologists, that five minutes is probably enough time to check a procedure in the experimental section). To that end, it's worth noting that many journals do not seem to put their Supplementary Information files behind their paywalls, and thorough experimental details seem more and more to be showing up in those, rather than the main text.
Note: DeepDyve has access to Elsevier, Wiley, and Royal Society of Chemistry journals, among many others. Nature is in there, but not Science. There also appears to be no Journal of Biological Chemistry, to pick a heavy hitter on the bio end. And for the less-common chemistry needs, there appears to be no access to Heterocycles or the Journal of Heterocyclic Chemistry, and no Phosphorus, Sulfur, although many other out-of-the-way journals do show up. Update: note also that the American Chemical Society does not seem to be a participant at all. . .
But for people without journal access, this could be the best of a number of not-so-good options. I'll give it a try myself next time I run into some reference in a journal that my own institution doesn't subscribe to, and see how it goes. Thoughts and experiences welcome in the comments. . .
+ TrackBacks (0) | Category: The Scientific Literature
June 6, 2013
The failure at the FDA of Aveo's kinase inhibitor tivozanib has had the expected fallout: the company has cut over half its employees.
I cannot resist linking to Adam Feuerstein's take on this news. If there's a case against his viewpoint, I'd be glad to link to that as well. But for now:
Aveo Oncology (AVEO) fired 140 middle and lower-level employees -- 62 percent of its workforce -- on Tuesday in order to save money to pay the salaries and bonuses of its top executives who blew up the company, decimated shareholder value and are too cowardly to accept responsibility for their incompetence.
At Aveo, accountability starts at the bottom.
+ TrackBacks (0) | Category: Business and Markets
Update: the story continues to develop. The scientist mentioned below, Jingwu Zang, has been dismissed from GSK, and others are under investigation. The paper itself is in the process of being retracted. More here.
This is quite bad. Reports have been circulating that GlaxoSmithKline is investigating the scientists (and the results) behind this 2010 paper in Nature Medicine.
That first link from Pharmalot mentions this thread at the Chinese mitbbs.com site, and similar stuff has been showing up elsewhere. The online speculation is about Jingwu Zang (sometimes appearing as Zhang, the more common transliteration of the name), who was the lead author on the paper. Various postings (from the same person?) claim that Zang has been let go from GSK, and the Biocentury link in the first paragraph says that mail to his corresponding address bounces back.
The paper is (was?) on IL-7's role in autoimmune disease, a perfectly good topic for a drug company research group to be investigating, of course. But now we're going to have to watch to see if any retraction comes out of this - GSK doesn't have to comment on their hiring (and firing) decisions, but I hope that they wouldn't let a fraudulent Nature Medicine paper stand. That's the really disturbing thing about this situation; I'll see if I can explain what I mean.
A critic from outside the drug industry might say "So what? You people publish shady junk all the time. What's another truth-stretching paper, more or less?" Now, I resent implications like that, but at the same time, there have indeed been instances of nasty publication behavior (ghostwriting, etc.), which I deplore. But those things have been driven by the desire to increase sales of approved drugs. They come from overzealous marketing departments clawing for share, trying to get physicians to write for the company's drug over the other choices.
But the further back you go from the elbow-throwing front lines of the market, the less of that stuff you should see. The paper under scrutiny is early-stage research; it could have come from any good lab (academic or industrial) studying T-cell behavior, multiple sclerosis, or autoimmune mechanisms. Frankly, most of the shady stuff (and most of the retractions) in this kind of work comes from academia: the viciously competitive front lines of their market are publications in prestigious journals (like Nature Medicine), which directly bear on funding and tenure decisions. Drug companies have an incentive to stretch the truth about how wonderful their current drug is, not about what their scientists have discovered about biochemistry and cell biology. That doesn't bring in any money.
But what a publication like that does bring in, perhaps, is internal prestige. If you're trying to show what a big deal your particular branch of the company is, and what high-quality work they do, this would be one good way to do it. Keep in mind, publications like this are not the primary goal of people in the drug business; it's not like academia. The job of a drug company research group is to increase the number of drugs the company finds, and publishing in a good journal really doesn't have much to do with that. This publication, though, is a way of telling everyone else - other drug companies, other academic and industrial scientists, other departments and higher-ups at GSK who may or may not know much about immunology per se - that GSK's Shanghai labs do good enough work to get it into Nature Medicine.
And while we're talking about this, let's talk about another widely-held belief about pharma research branches in China. There have, of course, been a number of these opened over the last five or ten years. And there are a lot of good scientists in China, and there are a lot of research topics that are relevant to the needs of a big drug company, so why not? But it's also widely assumed - although this is certainly not written down anywhere - that the Chinese government very much encourages big foreign companies to start such operations in China itself. If you lend your company's internationally known name to an operation in Shanghai (or wherever), if you invest in getting that site going, if you hire a big group of Chinese nationals to work there and manage things. . .well, the Chinese authorities are just going to like you more. Aren't they? And while being liked by the authorities is never a bad thing in any country in the world, particularly in a heavily regulated industry like pharmaceuticals, it is a particularly good thing in some of them.
This is an unfortunate situation. I believe very strongly in a government of laws, not of men - appropriately enough for where I work, that phrase was written by John Adams into the Constitution of Massachusetts. It's an ideal very difficult to realize, particularly since both Massachusetts and the rest of the world are stocked with human beings, but ideals are supposed to be difficult to realize. I understand that personal connections matter all over the world, and that this is by no means always a bad thing. But the bigger and broader the issues, the more important should be the rule of law.
The particular problem of multinational Chinese research institutes, which this current scandal can only worsen, is that too many people can assume that they've been built mainly to satisfy the Chinese government. They suffer, in other words, from the curse of affirmative action (and other such preference programs): the ever-present suspicion that once merit and ability are made secondary, that all bets are off. (This online debate at The Economist does a good job of airing out such concerns). In other words, the government of China could well end up accomplishing the exact reverse of what it's presumably trying to do: instead of elevating Chinese research (and researchers), it could be damaging the reputations of both.
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
June 5, 2013
Chiral catalyst reactions seem to show up on both lists when you talk about new reactions: the list of "Man, we sure do need more of those" and the "If I see one more paper on that I'm going to do something reckless" list.
I sympathize with the latter viewpoint, but the former is closer to reality. What we don't need are more lousy chiral catalyst papers, though - on that I think we can all agree. So I wanted to mention a good one, from Erick Carreira's group at the ETH. They're trying something that we're probably going to be seeing more of in the future: a "dual-catalyst" approach:
In a conceptually different construct aimed at the synthesis of compounds with a pair of stereogenic centers, two chiral catalysts employed concurrently could dictate the configuration of the stereocenters in the product. Ideally, these would operate independently and set both configurations in a single transition state with minimal matched/mismatched interactions. Herein, we report the realization of this concept in the development of a method for the stereodivergent dual-catalytic α-allylation of aldehydes.
Shown is a typical reaction scheme. They're doing iridium-catalyzed allylation reactions, which are already known via the work of Hartwig and others, but with a chiral catalyst to activate the nucleophilic end of the reaction and a separate one for the electrophilic end. That lets you more or less dial in the stereocenters you want in the product. It looks like the allyl alcohol needs some sort of aryl group, although they can get it to work with a variety of those. The aldehyde component can vary more widely.
You'd expect a scheme like this to have some combinations that work great, but other mismatched ones that struggle a bit. But in this case the yields stay at 60 to 80%, and the ee values are >99% across the board as they switch things around, which is why we're reading this in Science rather than in, well, you can fill in the names of some other journals as well as I can. Making a quaternary chiral center next to a tertiary one in whatever configuration you want is not something you see every day.
I think that chiral multi-catalytic systems will be taking up even more journal pages than ever in the future. It really seems like a way to get things to perform, and there's certainly enough in the idea to keep a lot of people occupied for a long time. Those of us doing drug discovery should resist the urge to flip the pages too quickly, too, because if we really mean all that stuff about making more three-dimensional molecules, we're going to have to do better with chirality than "Run it down an SFC and throw half of it away".
+ TrackBacks (0) | Category: Chemical News
If you're an iPad sort of chemist (one of Baran's customers?), you may well already know that app versions of ChemDraw and Chem3D came out yesterday for that platform. I haven't tried them out myself, not (yet) being a swipe-and-poke sort of guy, but at $10 for the ChemDraw app (and Chem3D for free), it could be a good way to get chemical structures going on your own tablet.
Andre the Chemist has a writeup on his experiences here. As an inorganic chemist, he's run into difficulties with text labels, but for good ol' organic structures, things should be working fine. I'd be interested in hearing hands-on reviews of the software in the comments: how does the touch-screen interface work out for drawing? Seems like it could be a good fit. . .
Update: here's a review at MacInChem, and one at Chemistry and Computers.
+ TrackBacks (0) | Category: Chemical News
June 4, 2013
Late last year came word that the AstraZeneca/Rigel compound, fostamatinib, had failed to show any benefit versus AbbVie's Humira in the clinic. Now they've gritted their corporate teeth and declared failure, sending the whole program back to Rigel.
I've lost count of how many late-stage clinical wipeouts this makes for AZ, but it sure is a lot of them. The problem is, it's hard to say just how much of this is drug discovery itself (after all, we have brutal failure rates even when things are going well), how much of it is just random bad luck, or what might be due to something more fundamental about target and compound selection. At any rate, their CEO, Pascal Soriot, has a stark backdrop against which to perform. Odds are, things will pick up, just by random chance if by nothing else. But odds are, that may not be enough. . .
+ TrackBacks (0) | Category: Business and Markets | Clinical Trials
I see that Neil Withers is trying to start up a new discussion in that "Kudzu of Chemistry" comment thread. The main topic is what reactions and chemistry we see too much of, but he's wondering what we should see more of. It's a worthwhile question, but I wonder if it'll be hard to answer. Personally, I'd like to see more reactions that let me attach primary and secondary amines directly into unactivated alkyl CH bonds, but I'm not going to arrange my schedule around that waiting period.
So maybe we should stick with reactions (or reaction types) that have been reported, but don't seem to be used as much as they should be. What are the unsung chemistries that should be more famous? What reactions have you seen where you can't figure out why no one's ever followed up on them? I'll try to add some of my own in the comments as the day goes on.
+ TrackBacks (0) | Category: Chemical News
June 3, 2013
Here's a worthwhile paper from Donna Huryn, Lynn Resnick, and Peter Wipf on the academic contributions to chemical biology in recent years. They're not only listing what's been done, they're looking at the pluses and minuses of going after probe/tool compounds in this setting:
The academic setting provides a unique environment distinct from traditional pharmaceutical or biotechnology companies, which may foster success and long-term value of certain types of probe discovery projects while proving unsuitable for others. The ability to launch exploratory high risk and high novelty projects from both chemistry and biology perspectives, for example, testing the potential of unconventional chemotypes such as organometallic complexes, is one such distinction. Other advantages include the ability to work without overly constrained deadlines and to pursue projects that are not expected to reap commercial rewards, criteria and constraints that are common in “big pharma.” Furthermore, projects to identify tool molecules in an academic setting often benefit from access to unique and highly specialized biological assays and/or synthetic chemistry expertise that emerge from innovative basic science discoveries. Indeed, recent data show that the portfolios of academic drug discovery centers contain a larger percentage of long-term, high-risk projects compared to the pharmaceutical industry. In addition, many centers focus more strongly on orphan diseases and disorders of third world countries than commercial research organizations. In contrast, programs that might be less successful in an academic setting are those that require significant resources (personnel, equipment, and funding) that may be difficult to sustain in a university setting. Projects whose goals are not consistent with the educational mission of the university and cannot provide appropriate training and/or content for publications or theses would also be better suited for a commercial enterprise.
Well put. You have to choose carefully (just as commercial enterprises have to), but there are real opportunities to do something that's useful, interesting, and probably wouldn't be done anywhere else. The examples in this paper are sensors of reactive oxygen species, a GPR30 ligand, HSP70 ligands, an unusual CB2 agonist (among other things), and a probe of beta-amyloid.
I agree completely with the authors' conclusion - there's plenty of work for everyone:
By continuing to take advantage of the special expertise resident in university settings and the ability to pursue novel projects that may have limited commercial value, probes from academic researchers can continue to provide valuable tools for biomedical researchers. Furthermore, the current environment in the commercial drug discovery arena may lead to even greater reliance on academia for identifying suitable probe and lead structures and other tools to interrogate biological phenomena. We believe that the collaboration of chemists who apply sound chemical concepts and innovative structural design, biologists who are fully committed to mechanism of action studies, institutions that understand portfolio building and risk sharing in IP licensing, and funding mechanisms dedicated to provide resources leading to the launch of phase 1 studies will provide many future successful case studies toward novel therapeutic breakthroughs.
But it's worth remembering that bad chemical biology is as bad as anything in the business. You have the chance to be useless in two fields at once, and to bore people across a whole swath of science. Getting a good probe compound is not like sitting around waiting for the dessert cart to come - there's a lot of chemistry to be done, and some biology that's going to be tricky almost by definition. The examples in this paper should spur people on to do the good stuff.
+ TrackBacks (0) | Category: Chemical Biology
Chemistry, like any other human-run endeavor, goes through cycles and fads. At one point in the late 1970s, it seemed as if half the synthetic organic chemists in the world had made cis-jasmone. Later on, a good chunk of them switched to triquinane synthesis. More recently, ionic liquids were all over the literature for a while, and while it's not like they've disappeared, they're past their publishing peak (which might be a good thing for the field).
So what's the kudzu of chemistry these days? One of my colleagues swears that you can get anything published these days that has to do with a BODIPY ligand, and looking at my RSS journal feeds, I don't think I have enough data to refute him. There are still an awful lot of nanostructure papers, but I think that it's a bit harder, compared to a few years ago, to just publish whatever you trip over in that field. The rows of glowing fluorescent vials might just have eased off a tiny bit (unless, of course, that's a BODIPY compound doing the fluorescing!). Any other nominations? What are we seeing way too much of?
+ TrackBacks (0) | Category: Chemical News | The Scientific Literature
May 31, 2013
For those who are into total synthesis of natural products, Arash Soheili has a Twitter account (Total_Synthesis) that keeps track of all the reports in the major journals. He's emailed me with a link to a searchable database of all these, which brings a lot of not-so-easily-collated information together into one place. Have a look! (Mostly, when I see these, I'm very glad that I'm not still doing them, but that's just me).
+ TrackBacks (0) | Category: Chemical News | Natural Products
It's molecular imaging week! See Arr Oh and others have sent along this paper from Science, a really wonderful example of atomic-level work. (For those without journal access, Wired and PhysOrg have good summaries).
As that image shows, what this team has done is take a starting (poly)phenylacetylene compound and let it cyclize to a variety of products. And they can distinguish the resulting frameworks by direct imaging with an atomic force microscope (using a carbon monoxide molecule as the tip, as in this work), in what is surely the most dramatic example yet of this technique's application to small-molecule structure determination. (The first use I know of, from 2010, is here). The two main products are shown, but they pick up several others, including exotica like stable diradicals (compound 10 in the paper).
There are some important things to keep in mind here. For one, the only way to get a decent structure by this technique is if your molecules can lie flat. These are all sitting on the face of a silver crystal, but if a structure starts poking up, the contrast in the AFM data can be very hard to interpret. The authors of this study had this happen with their compound 9, which curls up from the surface and whose structure is unclear. Another thing to note is that the product distribution is surely altered by the AFM conditions: a molecule in solution will probably find different things to do with itself than one stuck face-on to a metal surface.
But these considerations aside, I find this to be a remarkable piece of work. I hope that some enterprising nanotechnologists will eventually make some sort of array version of the AFM, with multiple tips splayed out from each other, each CO molecule feeding into its own data channel. Such an AFM "hand" might be able to deconvolute more three-dimensional structures (and perhaps sense chirality directly?) Easy for me to propose - I don't have to get it to work!
+ TrackBacks (0) | Category: Analytical Chemistry | Chemical News
May 30, 2013
Here's a question for the organic chemists in the crowd, and not just those in the drug industry, either. Over the last few years, there's been a lot of discussion about how drug company compound libraries have too many compounds with too many aromatic rings in them. Here are some examples of just the sort of thing I have in mind. As mentioned here recently, when you look at real day-to-day reactions from the drug labs, you sure do see an awful lot of metal-catalyzed couplings of aryl rings (and the rest of the time seems to be occupied with making amides to link more of them together).
Now, it's worth remembering that some of the studies on this sort of thing have been criticized for stacking the deck. But at the same time, it's undeniable that the proportion of "flat stuff" has been increasing over the years, to the point that several companies seem to be openly worried about the state of their screening collections.
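As an aside, one rough way to put a number on "flat stuff" is the fraction of sp3-hybridized carbons (Fsp3), a common proxy in this literature. Here's a minimal sketch using RDKit's CalcFractionCSP3 descriptor - the example structures are just illustrative placeholders, not compounds from any of the studies above:

```python
# A minimal sketch: scoring molecules for "flatness" with RDKit's
# fraction-sp3-carbon (Fsp3) descriptor. The SMILES strings here are
# illustrative placeholders, not compounds from any screening collection.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

examples = {
    "biphenyl":        "c1ccc(-c2ccccc2)cc1",     # all-aromatic: Fsp3 = 0.0
    "aryl amide":      "c1ccccc1C(=O)Nc1ccccc1",  # the classic coupling product
    "tetrahydropyran": "C1CCOCC1",                # fully saturated: Fsp3 = 1.0
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    fsp3 = rdMolDescriptors.CalcFractionCSP3(mol)
    print(f"{name:16s} Fsp3 = {fsp3:.2f}")
```

Run over a whole screening file, a histogram of those values makes the "too flat" complaint easy to eyeball, for whatever the metric is worth.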
So here's the question: if you're trying to break out of this, and go to more three-dimensional structures with more saturated rings, what are the best ways to do that? The Diels-Alder reaction has come up here as an example of the kind of transformation that doesn't get run so often in drug research, and it has to be noted that it provides you with instant 3-D character in the products. What we could really use are reactions that somehow annulate pyrrolidines or tetrahydropyrans onto other systems in one swoop, or reliably graft on spiro systems where there was a carbonyl, say.
I know that there are some reactions like these out there, but it would be worthwhile, I think, to hear what people think of when they think of making saturated heterocyclic ring systems. Forget the indoles, the quinolines, the pyrazines and the biphenyls: how do you break into the tetrahydropyrans, the homopiperazines, and the saturated 5,5 systems? Embrace the stereochemistry! (This impinges on the topic of natural-product-like scaffolds, too).
My own nomination, for what it's worth, is to use D-glucal as a starting material. If you hydrogenate that double bond, you now have a chiral tetrahydropyran triol, with differential reactivity, ready to be functionalized. Alternatively, you can go after that double bond to make new fused rings, without falling back into making sugars. My carbohydrate-synthesis PhD work is showing here, but I'm not talking about embarking on a 27-step route to a natural product (one of those per lifetime is enough, thanks). I just think the potential for library synthesis in this area is underappreciated.
+ TrackBacks (0) | Category: Chemical News | Life in the Drug Labs
Here's a follow-up on the news that bexarotene might be useful for Alzheimer's. Unfortunately, what seems to be happening is what happens almost every time that the word "Alzheimer's" is mentioned along with a small molecule. As Nature reports here, further studies are delivering puzzling results.
The original work, from the Landreth lab at Case Western, reported lower concentrations of soluble amyloid, memory improvements in impaired rodents, and (quite strikingly), clearance of large amounts of existing amyloid plaque in their brain tissue. Now four separate studies (1, 2, 3, 4) are out in the May 24th issue of Science, and the waters are well muddied. No one has seen the plaque clearance, for one thing. Two groups have noted a lowering of soluble amyloid, though, and one study does report some effects on memory in a mouse model.
So where are we? Here's Landreth himself on the results:
“It was our expectation other people would be able to repeat this,” says Landreth about the results of the studies. “Turns out that wasn’t the case, and we fundamentally don’t understand that.” He suggests that the other groups might have used different drug preparations that altered the concentration of bexarotene in the brain or even changed its biological activity.
In a response published alongside the comment articles, Landreth emphasizes that some of the studies affirm two key conclusions of the original paper: the lowering of soluble β-amyloid levels and the reversal of cognitive deficits. He says that the interest in plaques may even be irrelevant to Alzheimer’s disease.
That last line of thought is a bit dangerous. It was, after all, the plaque clearance that got this work all the attention in the first place, so to claim that it might not be that big a deal once it failed to repeat looks like an exercise in goalpost-shifting. There might be something here, don't get me wrong. But chasing it down is going to be a long-term effort. It helps, of course, that bexarotene has already been out in clinical practice for a good while, so we already know a lot about it (and the barriers to its use are lower). But there's no guarantee that it's the optimum compound for whatever this effect is. We're in for a long haul. With Alzheimer's, we're always in for a long haul, it seems. I wish it weren't so.
+ TrackBacks (0) | Category: Alzheimer's Disease
May 29, 2013
You'd think that by now we'd know all there is to know about the side effects of sulfa drugs, wouldn't you? These were the top-flight antibiotics about 80 years ago, remember, and they've been in use (in one form or another) ever since. But some people have had pronounced CNS side effects from their use, and it's never been clear why.
Until now, that is. Here's a new paper in Science that shows that this class of drugs inhibits the synthesis of tetrahydrobiopterin, an essential cofactor for a number of hydroxylase and reductase enzymes. And that in turn interferes with neurotransmitter levels, specifically dopamine and serotonin. The specific culprit here seems to be sepiapterin reductase (SPR). Here's a summary at C&E News.
This just goes to show you how much there is to know, even about things that have been around forever (by drug industry standards). And every time something like this comes up, I wonder what else there is that we haven't uncovered yet. . .
+ TrackBacks (0) | Category: Infectious Diseases | Toxicology
Here's another one of those images that gives you a bit of a chill down the spine. You're looking at a hydrogen atom, and those spherical bands are the orbitals in which you can find its electron. Here, people, is the wave function. Yikes.

Update: true, what you're seeing are the probability distributions as defined by the wave function. But still. . .
This is from a new paper in Physical Review Letters (here's a commentary at the APS site on it). Technically, what we're seeing here are Stark states, which you get when the atom is exposed to an electric field. Here's more on how the experiment was done:
In their elegant experiment, Stodolna et al. observe the orbital density of the hydrogen atom by measuring a single interference pattern on a 2D detector. This avoids the complex reconstructions of indirect methods. The team starts with a beam of hydrogen atoms that they expose to a transverse laser pulse, which moves the population of atoms from the ground state to the 2s and 2p orbitals via two-photon excitation. A second tunable pulse moves the electron into a highly excited Rydberg state, in which the orbital is typically far from the central nucleus. By tuning the wavelength of the exciting pulse, the authors control the exact quantum numbers of the state they populate, thereby manipulating the number of nodes in the wave function. The laser pulses are tuned to excite those states with principal quantum number n equal to 30.
The presence of the dc field places the Rydberg electron above the classical ionization threshold but below the field-free ionization energy. The electron cannot exit against the dc field, but it is a free particle in many other directions. The outgoing electron wave accumulates a different phase, depending on the direction of its initial velocity. The portion of the electron wave initially directed toward the 2D detector (direct trajectories) interferes with the portion initially directed away from the detector (indirect trajectories). This produces an interference pattern on the detector. Stodolna et al. show convincing evidence that the number of nodes in the detected interference pattern exactly reproduces the nodal structure of the orbital populated by their excitation pulse. Thus the photoionization microscope provides the ability to directly visualize quantum orbital features using a macroscopic imaging device.
n=30 is a pretty excited atom, way off the ground state, so it's not like we're seeing a garden-variety hydrogen atom here. But the wave function for a hydrogen atom can be calculated for whatever state you want, and this is what it should look like. The closest thing I know of to this is the work with field emission electron microscopes, which measure the ease of moving electrons from a sample, and whose resolution has been taken down to alarming levels.
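For reference, here's the textbook zero-field form of that wave function (the Stark states actually imaged are field-mixed superpositions of these, so take this as the no-field sketch):

```latex
% Zero-field hydrogen wave function, textbook form. The Stark states in
% the experiment are superpositions of these, mixed by the applied dc field.
\psi_{n\ell m}(r,\theta,\phi) = R_{n\ell}(r)\, Y_{\ell}^{m}(\theta,\phi),
\qquad
R_{n\ell}(r) \;\propto\; e^{-r/(n a_0)}
  \left(\frac{2r}{n a_0}\right)^{\ell}
  L_{n-\ell-1}^{2\ell+1}\!\left(\frac{2r}{n a_0}\right)
```

What the detector reports is the probability density |ψ|², and a field-free state carries n − ℓ − 1 radial nodes, which gives a rough sense of why an n = 30 Rydberg state shows up as so many concentric bands.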
So here we are - one thing after another that we've had to assume is really there, because the theory works out so well, turns out to be observable by direct physical means. And they are really there. Schoolchildren will eventually grow up with this sort of thing, but the rest of us are free to be weirded out. I am!
+ TrackBacks (0) | Category: General Scientific News
May 28, 2013
Readers may recall the bracing worldview of Valeant CEO Mike Pearson. Here's another dose of it, courtesy of the Globe and Mail. Pearson, when he was brought in from McKinsey, knew just what he wanted to do:
Pearson’s next suggestion was even more daring: Cut research and development spending, the heart of most drug firms, to the bone. “We had a premise that most R&D didn’t give good return to shareholders,” says Pearson. Instead, the company should favour M&A over R&D, buying established treatments that made enough money to matter, but not enough to attract the interest of Big Pharma or generic drug makers. A drug that sold between $10 million and $200 million a year was ideal, and there were a lot of companies working in that range that Valeant could buy, slashing costs with every purchase. As for those promising drugs it had in development, Pearson said, Valeant should strike partnerships with major drug companies that would take them to market, paying Valeant royalties and fees.
It's not a bad strategy for a company that size, and it sure has worked out well for Valeant. But what if everyone tried to do the same thing? Who would actually discover those drugs for in-licensing? That's what David Shaywitz is wondering at Forbes. He contrasts the Valeant approach with what Art Levinson cultivated at Genentech:
While the industry has moved in this direction, it’s generally been slower and less dramatic than some had expected. In part, many companies may harbor unrealistic faith in their internal R&D programs. At the same time, I’ve heard some consultants cynically suggest that to the extent Big Pharma has any good will left, it’s due to its positioning as a science-driven enterprise. If research was slashed as dramatically as at Valeant, the industry’s optics would look even worse. (There’s also the non-trivial concern that if Valeant’s acquisition strategy were widely adopted, who would build the companies everyone intends to acquire?)
The contrasts between Levinson’s research nirvana and Pearson’s consultant nirvana (and scientific dystopia) could hardly be more striking, and frame two very different routes the industry could take. . .
I can't imagine the industry going all one way or all the other. There will always be people who hope that their great new ideas will make them (and their investors) rich. And as I mentioned in that link in the first paragraph, there's been talk for years about bigger companies going "virtual", and just handling the sales and regulatory parts, while licensing in all the rest. I've never been able to quite see that, either, because if one or more big outfits tried it, the cost of such deals would go straight up - wouldn't it? And as it did, the numbers would stop adding up. If everyone knows that you have to make deals or die, well, the price of deals has to increase.
But the case of Valeant is an interesting and disturbing one. Just think over that phrase: ". . .most R&D didn't give good return to shareholders". You know, it probably hasn't. Some years ago, the Wall Street Journal estimated that the entire biotech industry, taken top to bottom across its history, had yet to show an actual profit. The Genentechs and Amgens were cancelled out, and more, by all the money that had flowed in, never to be seen again. I would not be surprised if that were still the case.
So, to steal a line from Oscar Wilde (who was no stranger to that technique), is an R&D-driven startup the triumph of hope over experience? Small startups are the very definition of trying to live off the returns of R&D, and most of them fail. The problem, of course, is that any Valeants out there need someone to do the risky research so that there's something for them to buy. An industry full of Mike Pearsons would be a room full of people all staring at each other in mounting perplexity and dismay.
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History