Why you should avoid hospitals during the month of July

I haven’t posted in a while, mostly because I’ve been so busy studying for (and hopefully passing) Step 1 of the US medical licensing exam. But I thought this would be a good time to write a post – for one, I have the day and weekend entirely free, and two, I start rotations in the various clinical specialties next week and am not sure how much free time I will actually have at that point.

So, why should you avoid hospitals during the month of July? To avoid run-ins with new interns and 3rd year medical students like me! No, I’m not trying to be self-deprecating…just realistic. Interns started at the beginning of this week – these are students who just graduated medical school and are starting the first year of their residency. Many of them are at new hospitals in a new part of the country, and have not had to assume a whole lot of responsibility up to this point. They’re nervous and still learning the ropes, and it will take time for them to feel comfortable with the situation.

Then there are 3rd year medical students like myself – we’ve had limited patient contact up to this point, but now we are thrust into the hospital and assigned patients to follow as we rotate between specialties like surgery, medicine, pediatrics, etc. We have limited clinical skills, but are expected to hone them as we complete the year (this includes things involving needles and patients – such as inserting IVs and drawing blood…scary thought). We may not have much say (read: almost none) in what happens to the patients, but since we are new to the hospital we inevitably don’t know much about what’s going on and will slow the well-oiled machine down.

These factors contribute to longer stays and higher rates of mortality during the month of July (http://www.nber.org/papers/w11182). Just another reason to be careful this July 4 holiday.

~ Lily


Who says our kids don’t learn anything in school?

The Baltimore Sun reports on a group of city high school students threatening to starve themselves unless their demands for taxpayer funds are met.  I wonder if it ever occurred to them to try raising the money themselves through voluntary donors?  Probably not.


Newsflash: Researchers Discover Peer-Pressure

I’m thinking of making this the first post in a series on news that is not news but soon will be. The Washington Post is now reporting that the behavior of your peers influences how you behave. I…am…shocked. I never imagined that being surrounded by people eating pizza might make me want a slice. Even less did I suspect that hanging out with drunks might make it easier to get a drink. I think I need to quit my job, spend a week in the woods and reevaluate my life. Apparently my belief in a personal identity has been a lie.

In all seriousness, however, couched within this absurd restatement of the obvious is both a confession of bureaucratic ineptitude and an ominous signal that we’re probably in store for even more of the same:

“What all these studies do is force us to start to kind of rethink our mental model of how we behave,” said Duncan Watts, a Columbia University sociologist. “Public policy in general treats people as if they are sort of atomized individuals and puts policies in place to try to get them to stop smoking, eat right, start exercising or make better decisions about retirement, et cetera. What we see in this research is that we are missing a lot of what is happening if we think only that way.”

Mr. Watts would probably feel very comfortable in a discussion of “libertarian paternalism” such as the one that took place at the Cato Institute recently (audio link here). I think Will Wilkinson, who took part in that event, has an excellent perspective on the issues (you can get a taste here). My own two cents is that the government is, by its nature, slow-moving, inflexible, and unaccommodating, so it should come as no surprise that public policies trying to change peaceful behaviors are ineffective. The more we depend on bureaucrats, in place of our friends and family, for advice, the more we are likely to perpetuate foolish mistakes. Keep an eye on people who think they know how to lead your life better than you do. They don’t.


Is more freedom the solution to the global food crisis?

According to “AfricanLiberty.org“, it is:

Definitely makes you think about the unintended consequences of our policy toward developing countries, as well as the policies within those places. It’s the poor who are hungry and suffering because we underestimate the effects of meddling with the market. Just something to ponder the next time you consider filling your car up with ethanol fuel.

via Tom Palmer at Cato@Liberty

A Flash in the Pan

I just finished reading The World Without Us by Alan Weisman. It is a superb marriage of speculation and science. The entire book is basically a fleshing out of the quote by architect Chris Riddle that appears at the beginning of Chapter 2: “‘If you want to destroy a barn,’ a farmer once told me, ‘cut an eighteen-inch-square hole in the roof. Then stand back.'” Weisman paints a compelling picture of how much human technology has changed the world we live in and also how quickly nature will change it back once we’re gone. Though he is firmly committed to the value of natural processes and natural beauty, Weisman’s tale is remarkably even-handed, acknowledging, for the most part, the equally splendid and valuable contributions of humanity’s presence. If you are concerned for the environment and quality of life, you should definitely check it out. If not, and you just want something entertaining, you should still pick up a copy.


How Progressives Use False Dichotomies to Expand Government

Some people find Barack Obama inspirational. I do not. Unfortunately, charisma is not a proposition with which you can argue. It is a personal trait that has almost nothing to do with a man’s beliefs or intentions, and yet in democratic politics it seems to carry more weight than either. Therefore, I feel fairly confident that Senator Obama will be President Obama come 2009. Nonetheless, I think Mr. Obama’s commencement address delivered today at Wesleyan University should provide cause for concern to anyone who values privacy and personal autonomy. A transcript of the speech is available here, and you should read it to form your own opinion both as to the strength of Obama’s rhetorical skills and the quality of his vision for the country. If you want to know what’s wrong with both, you can continue below.


Why in the hell do people wait so long to get married?

I was 21 when I proposed to my wife. I was spending the semester studying abroad in Italy, and I was surrounded by Italians and other Europeans who looked at me aghast when they learned that I was engaged to be married. I didn’t think much of it at the time. After all, I’d heard plenty about how Europeans, and Italians in particular, were notorious for prolonging childhood well into their adult years (e.g., 30-year-olds living with their parents, declining birth rates, etc.). If anything, I probably thought they were right: I was a little bit weird. Even by American standards, Lily and I were preparing to tie the knot well before most of our peers. Here’s a graph I put together from data available at Wikipedia (don’t know what’s wrong with the thumbnail, but the link seems to be working):

As you can see, the United States is somewhere in the middle of the pack across the nations surveyed, but it’s an outlier with respect to its developed cohort. Even still, the average age at first marriage for American men is 27, for American women, 25. From what I gather through personal experience and anecdote, though I don’t have any data at hand to back this up at the moment, those averages increase as people climb the economic ladder. As I’ve gotten older, and gained a little more perspective, I’ve become more and more perplexed by this trend, and I have to ask, what’s going on?

Here’s what I see. Most people I meet (right now heavily weighted towards yuppies and aspiring yuppies) who are my age (currently 26) are unmarried. Most of those approve of marriage as a general concept. Most of those (though a decidedly smaller percentage and with much less confidence) want to have kids. Yet very few of these are currently involved in long-term relationships, and those that are seem to have no definite plans on actually getting hitched. Why not?

There was an interesting article in the Atlantic a few months ago describing the situation from the perspective of older single women. The author even went so far as to propose a solution for younger single women: stop waiting for Mr. Right and start settling for Mr. Good Enough. Do you think she’s right? Are people really just too picky? Seems possible, but I think that may be giving most people too much credit. It seems like that might be a relatively painless way of averting an acknowledgment of fear of commitment. If that’s the case, I have to say that most people seem to have an idea of commitment which runs counter to my experience with marriage. Commitments are hard. Marriage is easy. Getting up every day to go to work is exhausting and requires a conscious effort to balance the rewards (paychecks) with the costs (cubicles and crowded subways). Coming home at night to spend time with my wife is easy. Chasing tail in bars is hard. Going to the movies, taking walks, and eating dinner with your best friend is easy. Having kids by yourself is hard. Having kids with a spouse is a little less hard.

So I ask you: Are you married? Do you want to be? When do you think you’ll get married? What are you waiting for? Just curious.


Ends and Means

Would you rather be free to buy a hot dog and a pack of cigarettes outside a king’s palace or stand in a bread line on the steps of parliament? That’s the question, in so many words, posed by Arnold Kling in his post discussing a recent book on foreign policy by Thomas Barnett. It’s an interesting and useful question that to my mind receives far too little discussion in contemporary American media.

Just this morning, Lily and I were listening to a piece on NPR talking about the infamous butterfly ballots from the Florida presidential election of 2000. The gist of the piece was the increasing lack of trust Americans have in the ability of our democratic processes to reliably communicate the will of the people to the men and women occupying the seats of power in the government. The main interviewee seemed very upset by the lack of transparency and wished to restore the confidence that had been lost. My assessment was not so uniformly gloomy. While I do believe that a lack of transparency is a bad thing, I am heartened by the potential that more and more people may find themselves driven to engage in the “eternal vigilance” advocated by Thomas Jefferson. As I see it, there are two primary solutions to dealing with a lack of trust in government: we can 1) attempt to get the government to institute procedures that will improve transparency and accountability, or 2) attempt to reduce the size and scope of the government so that potential corruption and incompetence are simply less troublesome because they have a smaller impact on our lives.

Democracy in America was originally conceived as a means to an end: namely, securing the widest scope of personal autonomy consistent with the preservation of equal rights among men. It has since been co-opted by the envious and elevated to a righteous end in itself in order to justify theft, redistribution and oppression. It is a simple fact that political processes are far less responsive to personal preference than free markets. Economic liberty is far more important than political enfranchisement. A house cannot stand without a foundation, but a foundation is worthless without a house. Both Barack Obama and John McCain seek to keep the population focused on constantly fixing cracks in the basement of political representation even as the roof of prosperity collapses on our heads.


It’s the lifestyle, stupid!

I was reading a NY Times article discussing the top residency choices for this year’s match (which I think is today – the day when graduating medical students find out where they will be going for the next 3-5 years of their training). The article follows a married couple from Harvard hoping to get into a dermatology residency, which, along with plastic surgery and ENT, is easily among the most competitive spots to snag.

Call me cynical, but the quotes from the med students about why they chose derm sound more like BS spouted within their personal statements and along the interview trail rather than what’s really motivating them. Consider this gem:

Ms. Singh said she initially planned to emulate her mother, a physician who focuses on treating major adult diseases.

A lecture on skin-pigment conditions like vitiligo changed her mind.

“Nobody can see if you have hypertension or asthma, but everybody knows if you have a pigmentary disorder and these changes are a lot more obvious and devastating to patients with skin of color,” Ms. Singh said.

I’ll tell you what changed her mind – she survived medical school and is probably graduating at the top of her class with multiple research publications. This means she can have her choice of specialty…add to that the fact that she already has 2 young children – why in the world would she go into general surgery or family practice when she can work 35-40 hours a week (or less!) and make good money? (Let’s not forget she has six figures’ worth of student loans to pay off.) Unless she absolutely loved some other specialty, she would have to be a little crazy not to consider dermatology. I’m not trying to downplay the importance of dermatologists – appearance is obviously very important in our society even if it isn’t life-threatening, and they definitely treat people with more “serious” conditions like skin cancer. But there is no reason for dermatology to need all of our best and brightest, other than the fact that it is the epitome of a lifestyle specialty.

What I also find amusing, which is not discussed within this article, concerns forthcoming physician shortages. You would think that a specialty as lucrative and competitive as dermatology would see no shortage amongst its ranks – after all, they more or less have their choice of any medical student available and should have no problem keeping up with demand. Yet even dermatologists are facing shortages similar to those in other areas of medicine. It can take weeks or months to get a referral to a dermatologist. This is not the NHS, where such waits would be expected because of the role of the British government to cap spending – this is the US, with a supposed “free-market” healthcare system. And then you begin to realize (if you haven’t already) that we’re not a free-market health care system. Physician shortages are due in large part to licensing restrictions (the MD monopoly, or “medical cartel” as it’s sometimes affectionately called)…medical schools keep their class sizes low (though many are finally starting to expand), and lucrative residencies keep their available spaces artificially low. Doctors of those “chosen” specialties like derm get to work as much or as little as they want, and make some serious money. Specialists get their big houses, fancy cars, and afternoons free to play golf, while the regular folks get longer waits and higher prices (which means higher insurance premiums, which inevitably results in more uninsured folks because they can no longer afford to pay the high premiums). See how this all starts to fit together? You should start to ask yourself why physician groups oppose or place restrictions on other non-physician health care providers (“Minute Clinics” with nurse practitioners, chiropractors, optometrists, etc)…are they really only looking out for your best interests, or could it be possible that they are also concerned with their own pocketbooks and lifestyle? Just sayin…

~ Lily

Those drug reps are good! (or…er…bad?)

I shadowed an internal medicine specialist last week, and as I was trailing behind him like a lost puppy (ah…the joys of being a clueless medical student) someone who appeared to be his buddy joined us as we reviewed charts and examined some imaging studies that had been ordered. The doctor and his buddy (who I presumed to be another doctor given that he was wearing a pair of blue scrubs) talked about their plans for the weekend, what topics should be presented at an upcoming conference, and then started discussing the pros/cons of various procedures and new techniques that might be helpful.

It was at this point that I got a closer look at the “buddy”, and noticed something inscribed on his scrubs that was neither his name nor the name of the hospital…it was the name of a drug company! Eek! I had been fooled! He was handsome, charming, and seemed to know the medical lingo (perhaps his good looks and perfect hair should have been the first clue?). He followed this doc around for a good 3 hours (maybe even more since I had to leave), including being present for a procedure on a patient, which seemed entirely unnecessary given that there was no “product” being used on this particular person. Drug reps definitely know which doctors respond to their attention, so the doc I was shadowing must have been a huge fan of their products.

It was weird to see the close relationship the two had, but did the patients know the physician was getting followed around all day like that? That he was getting paid to speak at conferences on behalf of this company? I don’t have a problem with drug reps per se…I understand products need to be marketed and sold, and I’m all in favor of competition…but if you publish a paper in a journal, you have to disclose all of your financial ties to show any potential bias you might have. Shouldn’t you do the same for your patients?

I’ve set a goal for myself for next year (my 3rd year of medical school, when I start rotations in the different specialties) – I’m going to see how long I can last without taking a single thing from a drug rep. No pens, no free lunches, no little gadgets…nothing. It’s mostly because I want to force myself to be aware of all the different ways the drug companies woo doctors/nurses/students…and because I like a good challenge. If I last a week, I’ll be proud…if I last a month, I’ll be amazed (free food is really hard to pass up when you’re poor and in a hurry). I’ll try and update the blog with all of the cool stuff I’m passing up, as well as how long it takes before I succumb to peer pressure. It’ll be fun! 😉

~ Lily

In the Foxhole

The Washington Post reports:

A soldier claimed Wednesday that his promotion was blocked because he had claimed in a lawsuit that the Army was violating his right to be an atheist.

I’m sure you’re all shocked. I found my own thoughts being expressed eloquently by one of the plaintiff’s supporters:

Mikey Weinstein, president and founder of the religious freedom foundation, said the lawsuit would show the “almost incomprehensible national security risks to America” posed by the military’s pattern of violating the religious freedom of those in uniform.

“It is beyond despicable, indeed wholly unlawful, that the United States Army is actively attempting to destroy the professional career of one of its decorated young fighting soldiers, with two completed combat tours in Iraq, simply because he had the rare courage to stand up for his constitutional rights,” Weinstein said in a statement.

How can we be opposing religious fascism in Iraq and the Middle East when our military is actively promoting it here?



Thoughts on the John Ritter case

I was perusing my blog feed this morning and came across a CNN article discussing the wrongful-death lawsuit against doctors who treated the actor John Ritter. Ritter was treated for a heart attack, when in fact he was suffering from a “torn aorta” as the article puts it, which in more technical terms is an “aortic dissection.”

It’s tragic that he (or anyone for that matter) has to die at such a relatively young age – 54 is much too early to go. But for me the significance of the case is to serve as a great reminder of how non-scientific medicine can be. Don’t get me wrong – there’s a lot of science to medicine, and a lot of treatments are prescribed because of solid evidence and many years of research comparing different treatments with outcomes. You’d think by now we’d be experts at treating someone who comes to the emergency room complaining of “chest pain”, until you realize how many different problems can present themselves under that single descriptive term. We’re taught in medicine to come up with a list of possible diseases each time a patient complains of a certain ailment – we call this the “differential diagnosis” and it includes both what we think is likely to have occurred as well as a list of long-shots. For instance, if someone came to the hospital with “chest pain” a doctor would consider serious problems such as heart attack, pulmonary embolism (blood clot to the lung), aortic dissection (torn aorta), pneumothorax (collapsed lung), and cardiac tamponade (blood around the heart that limits its pumping ability)…but would also consider benign problems such as indigestion, esophageal spasm, etc. They then go through their list and try to target their questions to rule in/out the various conditions, focusing on the more serious ones first since those pose the most immediate threat. In the case of John Ritter, his family history of heart problems may have come into play. Medications, alcohol/tobacco use, or previous medical ailments might also influence which diagnosis you lean towards.

I should add at this point that I’m not a doctor, let alone a cardiologist – I’m a medical student with a very introductory understanding of how this process works. Most emergency room doctors would have probably considered the various problems listed above, taken a detailed history of the patient, and then decided which course to pursue. If they suspected a heart problem, they would probably obtain a chest x-ray as well as an EKG to look for electrical abnormalities that might indicate a heart attack (and call the cardiologist to come down and examine the patient). But this is where it starts to get tricky – an EKG is a great tool, but will not always show electrical changes even if someone has had a heart attack or is on the verge of a heart attack. Thus if the patient’s history strongly suggests a heart attack but is not confirmed by an EKG, the doctor might still treat it as such. I have no idea what happened in the case of John Ritter, but perhaps that is one possibility of what took place. The classic presentation of aortic dissection is sudden onset chest pain that migrates…if Ritter’s pain wasn’t radiating (or if the doctor didn’t ask whether it was radiating) that diagnosis might be missed – that doctor would not have ordered a CT scan to look for tearing of the aortic wall, and might go on to treat as if the patient had something else.

Even if it is caught, aortic dissection is a terrible diagnosis with a very high death rate – surgery is required to immediately fix the tear before it occludes blood flow to vital organs and causes permanent damage/death.

In summary, I’m not writing this post to make excuses for the doctors involved in the case – I have no idea as to the specifics involved and what was or wasn’t considered. I’m merely trying to provide insight into how this whole medical process works, since most people outside the system are entirely clueless. Medicine is a wonderful tool with the potential to have huge impacts on our health and quality of life…but it is a mix of science and art, with the two frequently so intertwined that it may be difficult to distinguish where one stops and the next begins. Fancy tests only tell you so much, and are generally meaningless without a thorough history of the patient (I wonder how much money we could save by simply providing more time for talking with the patient, which may then allow us to avoid having to use the fancy high-tech toys at our disposal). Regardless of whether John Ritter’s death was the result of a medical mistake or an inevitable outcome, it’s tragic that he had to die at a young age. Hopefully medicine will evolve to provide more accurate distinctions between the various types of “chest pain” so that such tragedies may be avoided in the future.



Perfect vs. Good – Round 1

The Baltimore Sun reports today that Vermont is considering lowering the legal drinking age in the state.  This is a good idea.  A better idea would be to eliminate drinking age requirements entirely, but I am a firm believer that we usually ought not let the perfect stand in the way of the good, so I would gladly support such a measure.

Of course, since it’s been brought up, it seems worth it to ask how we got in the present situation, with uniform nationwide drinking age laws, in the first place.  The Constitution does not grant the federal government the authority to legislate on such matters.  We once recognized this, which is why the original proponents of nationwide alcohol restrictions had to pass an amendment to see their plans implemented.  Of course, we all know how poorly prohibition turned out then, and that the amendment was itself amended and revoked in short order.  So what happened next?  Well, in 1984 Congress passed the National Minimum Drinking Age Act, which extorted state compliance with the threat of withholding federal transportation funds.  Somehow the Supreme Court found that the legislature could require indirectly what it once could not demand explicitly.  It truly boggles the mind.

My favorite quote from the Sun piece came from John McCardell, the former college president leading the movement to lower the drinking age, “If Congress would grant a waiver, the states would be willing to try something, and at least then we could get some evidence and see whether things are better or worse.”  What a sad commentary on our present condition that states must petition the federal government for permission to participate in federalism.


Learn to Be Yourself

Over at Econlog, both Arnold Kling and Bryan Caplan weigh in on Brink Lindsey’s analysis of the obstacles facing lower-income Americans seeking a decent education.  Both are skeptical of the prospects of school-choice reforms (or any reforms for that matter) with respect to improving the children’s educational outcomes.  Arnold:

“I am somewhat pessimistic on competition in the school system as a panacea. I favor it, of course, but I suspect that the benefits would show up more in lower costs than in better outcomes.”


Bryan:

“In short, parents are correct to think that they can change their children. Their mistake is to suppose that the change will endure. Instead of thinking of kids as lumps of clay that parents “mold,” we should think of kids as plastic that flexes in response to pressure – and springs back to its original shape once the pressure goes away.”

In both cases, I think the economists are too caught up in discussing the response of a particular metric as opposed to exploring the full breadth of the topic available.  For instance, I think Floccian, commenting on Arnold’s post, hits the nail on the head: “We should not think that children can learn more in school but maybe they can learn more useful things.”

This reflects a dichotomy that I’ve noticed within libertarian discourse: you can either focus on the economic aspect of a given policy or attempt to take on its moral foundations.  Even though the two angles are more complementary than anything, they rarely appear in the same conversation, and it seems that the economic perspective is the more dominant of the two.  This is unfortunate, since, in my view, the logical foundations of individual natural rights are at least as robust as the predictions of material prosperity that flow from a free market.  In the case of education, empowering children to discover what type of work they both enjoy and are good at probably deserves a higher place than the current emphasis on improving their test-taking abilities.


Exhibit A: The Language of God is Babel

The Pew Forum on Religion and Public Life has just published a massive new report detailing the shape (or lack thereof) of the religious landscape in America. You can check out the results for yourself, here, but the primary feature that emerges from the analysis is turmoil, which confirms the views I expressed earlier, here. Unsurprisingly, Americans are overwhelmingly unsatisfied with the religions of their youth: “roughly 44% of adults have either switched religious affiliation, moved from being unaffiliated with any religion to being affiliated with a particular faith, or dropped any connection to a specific religious tradition altogether.” Also unsurprisingly, Americans are increasingly dissatisfied with religion in general: “those Americans who are unaffiliated with any particular religion have seen the greatest growth in numbers as a result of changes in affiliation.”

Why should these results not come as a surprise? Well, if you believe as I do that humans are uncomfortable with beliefs that are not supported by their daily experience, and that most actions are motivated by the desire to relieve discomfort, then the inexorable drift in the marketplace of ideas will be towards the truth, which is, in this case, that we inhabit a godless universe.

To be sure, this movement towards acceptance of reality is an unsteady march, but then again, there are still people out there hunting sasquatch and buying HD-DVD players. The only difference is that we can already openly laugh at the bigfoot brigade, while we might have a few years yet to go before politicians claiming an intimate relationship with a magical sky fairy are booed off stage. However, given the exponential growth in knowledge and access to that knowledge, that day might be even closer than we imagine.

