Here it is. Finally. The second blog post on orthorexia. Now, if you are new to this website and have no idea what orthorexia is, then you will want to read Orthorexia Nervosa I: Women Laughing Alone with Salad before continuing with this post.
A good companion piece for understanding the concepts of orthorexia is this blog post: Intellorexia: Hyper-Intellect Enslaved by Anxiety.
This piece has frankly been hard to generate and complete. I had 3,000-odd unused words sitting in an aborted attempt on the topic, followed by weeks of being completely stalled.
But more recently, as some of you know from my apologia on the forums, I picked up a book by Kathryn Schulz, Being Wrong: Adventures in the Margin of Error, and suddenly discovered I have something I actually want to say on the topic of orthorexia.
While I was attempting to methodically drag myself through all the various ‘healthy’ eating fads (there are so many) and deconstruct the failures of logic within each one (not to mention the extreme lack of supportive science or data), I realized I want to discuss the forest and not the trees.
Diet as a Fix-All for Chronic Disease?
In living memory, there was a time before antibiotics. Shortly out of living memory there was a time before any kind of inoculation or immunization save getting the infection itself and hoping you were lucky enough to survive it.
I was immunized against smallpox as a child. The last naturally occurring case of smallpox was recorded in 1977, and the disease was declared eradicated in 1980. That means many of you reading this have never lived in a world with smallpox. It was the pinnacle of modern medicine: the eradication of a deadly pathogen from our human population.
Eisenhower, on his leaving office as president of the United States, spoke of the importance of being wary of the military industrial complex—“we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military–industrial complex.” [D Eisenhower, January 17, 1961]
He essentially argued that the massive investment in a military establishment resulted in our dependency on its existence for economic and social reasons and that, in turn, means that it wields disproportionate influence on society.
While we may not have particularly heeded Eisenhower’s warning, we have also clearly failed to recognize that the same now holds true for our medical industrial complex.
I have already spoken of the medical industrial complex in Part VII of the Fat Series.
The massive immunization and inoculation programs we instigated in the first half of the 20th century, along with the level of organization and investment they demanded of our health care systems, were very successful in developed nations. But of course we now have massive health care systems left in the wake of that effort.
As a result, we simply turned our attention to things that don’t kill us outright, but can make our quality of life miserable and/or shorten our lifespans. Most of us are fortunate enough to live in a world where the greatest risk to life and limb is a chronic condition. And I do mean “fortunate”: dying of smallpox was a very nasty and brutal way to go, and there are still millions in developing nations who face curable diseases and yet die of them every day.
But there are no inoculations for heart disease, diabetes, innumerable autoimmune conditions, neuralgias of all kinds, and several degenerative conditions as well. And in the face of no solution for our finite existence, humans depend upon one of their most brilliant, and most fallible, skills: pattern identification.
And there is an entire medical and health industrial complex ready and willing to take our pattern identifications and run with them (our money in their hands).
The Practice of Science
We all learned about scientific practice in grade school. We know about developing a hypothesis and then attempting to disprove the hypothesis, but we are often not told why such a process might have any use for us in everyday life.
Humans can see faces in clouds. They can become physically nauseous at the thought of eating a type of food that happened to coincide with getting sick in the past. They can make snap judgments about everything and everyone, often in lifesaving ways and even more often in completely wrong ways.
In the absence of the practice of science, we depend completely upon our pattern sensing abilities. There are, in reality, no actual faces in those clouds. The reason you got sick was not that chicken dinner, but actually that someone with the flu sneezed near you three days before that chicken dinner. And the man you thought was planning on attacking you was actually racing to return the keys that you had failed to notice you had dropped.
The practice of science is really an attempt at forcing our brains to work beyond pattern recognition as a way of trying to uncover the truth.
The Hierarchy of Data
The data that are revealed through scientific inquiry are not all created equal. Here is the hierarchy of data, from weakest (6) to strongest (1):
6) Survey questionnaire results, personal recollection, chatting with a friend
5) Case study
4) Epidemiological study
3) Randomized controlled trial
2) Single-blinded randomized controlled trial
1) Double-blinded randomized controlled trial
Surveys and Such
Human beings put a lot of stock in first-hand or second-hand experience. If a friend tells you she has taken antidepressants and it has worked out swimmingly for her, then you are going to weight that input as evidence in favor of considering antidepressants yourself.
But we are running around in modern times in real need of a ‘software upgrade’ for our brains when it comes to how we determine whether the information we have is definitive or not.
No question, if you failed to heed the first-hand account of a friend who told you that a particular watering hole is crawling with crocodiles, you would be risking your life. But determining whether your friend’s experience with antidepressants is relevant to you requires that you apply the practice of science over top of your predilection for such anecdotal evidence.
Science is a practice of inquiry that is really set up to work around our brains’ limitations and tendency toward biases that make us believe something is a fact when it is not.
Many of you already know that the guideline to eat 2000 calories a day is based on the lowest common denominator in the hierarchy of data: survey responses. I cover the details in I Need How Many Calories?!! and Do the MinnieMaud Guidelines Apply in My Case? The issues with surveys and self-reports are that human beings are fallible in their recollections, generally lack absolute reporting accuracy, and that the phrasing of the survey questions themselves can further skew the responses.
Surveys give us some jumping-off points for further study, and that’s about it.
A case study is like being able to quiz your friend on her first-hand experience by speaking with her entire medical and/or psychiatric team and sifting through all her relevant test results and biomarkers. But the results that were achieved are not statistically significant.
What does “statistically significant” actually mean? It can mean a bunch of things, but at its core it means that whatever is being observed is unlikely to have happened by chance.
Imagine you come across a case study of patient “M,” who was successfully treated for a restrictive eating disorder by undergoing twelve sessions of acupuncture. Although there is enough clinical data to confirm “M” was indeed in remission at the end of twelve acupuncture sessions, the problem is that we cannot tell whether her remission happened spontaneously, occurred for other unknown or unmeasured reasons, or whether acupuncture for eating disorder treatment has a statistically significant effect (meaning that if I applied acupuncture to other patients I would see remission more often than I would expect by chance).
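To make the chance objection concrete, here is a minimal sketch in Python. The spontaneous remission rate used here is an entirely assumed number for illustration, not a clinical figure; the point is only that a single case study cannot distinguish a treatment effect from chance.

```python
import random

# ASSUMPTION for illustration only: suppose ~30% of comparable patients
# reach remission within a year with no intervention at all.
SPONTANEOUS_REMISSION_RATE = 0.30

def fraction_of_chance_successes(n_case_studies: int, seed: int = 42) -> float:
    """Simulate many single-patient 'case studies' in which the treatment
    does nothing, and count how often the patient reaches remission anyway."""
    rng = random.Random(seed)
    successes = sum(
        rng.random() < SPONTANEOUS_REMISSION_RATE
        for _ in range(n_case_studies)
    )
    return successes / n_case_studies

# Roughly 30% of do-nothing case studies will still look like a treatment
# success, which is why one remitted patient "M" proves nothing on her own.
print(f"{fraction_of_chance_successes(100_000):.1%}")
```

Only a comparison against the rate you would expect by chance (across many patients, with controls) can tell you whether the treatment itself did anything.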
You have probably come across articles where a journalist interviews centenarians to find out how they lived so long. Dutifully, the centenarian will say it was eating fish, not eating fish, drinking wine, not drinking wine, having the odd scotch, teetotaling. The point is that these are all examples of hindsight bias, and they tell us nothing about cause and effect.
Now let's imagine I take 10,000 people and study them in an epidemiological study over 35 years to try to figure out which diet appears to produce the least disease and death. What these kinds of studies show us are trends. For example, epidemiological studies show that a diet higher in fresh fruits and vegetables is associated with lower risk of disease and death.
There are several issues with confusing the results of an epidemiological study with the causative results that can be achieved through controlled experimentation. First of all, despite the brilliance of epidemiologists, there is no way they can account for the myriad environmental inputs to ever suggest causation in any but the most blatant outcomes.
Don’t get me wrong, I am a great fan of epidemiological studies as they are often the only way that ominous disease-generating factors can be uncovered in our environment. You have to start with uncovering correlations before you can get to what, if any, causative relationship might be present. Epidemiology uncovers correlation.
The fact that women who work in automobile plastics facilities have higher than expected levels of breast cancer was uncovered through epidemiology. In that particular case, the data were uncovered through a case-control study (not the same as the controls used in randomized experiments, which I’ll get to) [JT Brophy et al., 2012].
A case-control design means that you compare the group you are reviewing with another equivalent group that is, in all other respects, no different from your study group except for the key factor or factors you wish to study. So for the breast cancer study mentioned in the previous paragraph, the researchers looked at breast cancer data from hospitals (it was a multi-site, multi-country review) and matched controls who were in all other respects similar to the women working in the automobile industry, except that they did not work in that industry.
Does eating more fruits and vegetables lead to less illness and longer life? We don’t know. Read that again. We don’t know.
The only way we could know whether it really leads to less illness and longer life is if we had an experiment where all other factors that could be influencing the outcomes of illness and death can be ruled out as influencers. Epidemiology studies things out in the world and not in the lab and that means results are always indications but not definitive outcomes.
We know that those in higher socioeconomic levels in our society have much greater access and the financial wherewithal to get and eat fresh fruits and vegetables. We know that those in lower socioeconomic levels of our society have greater rates of illness and lower life expectancies and they often have far less access to fresh fruits and vegetables as well [D. Hendrickson et al., 2001].
But would providing fresh fruits and vegetables to those of lower socioeconomic levels change their rates of illness and improve life expectancy? Everyone assumes so, but there is no evidence to back up that assumption.
Establishing urban community farms in low-income areas of a city appears to be a great thing, providing very affordable access to fresh fruits and vegetables to those at lower socioeconomic levels of our society. But we have no data on whether it has any positive impact on morbidity and mortality rates. And if it did, would that be because community involvement allows individuals to experience the health-protective benefits of lowered isolation, or because veggies and fruits are part of their diet? Exactly.
Randomized Controlled Trials
How could we find out if eating fresh fruits and vegetables is so powerfully health protective that it overrides all other factors and inputs in an individual’s life experience such that she realizes a much healthier and longer life no matter what?
In order to confirm whether the correlation of eating vegetables and fruit being linked with longer healthier lives is actually due to eating vegetables and fruit (and not some other unidentified and undetected factor), we have to apply a randomized controlled experiment.
And there is not going to be such a lifelong laboratory experiment where we have a randomly chosen bunch of human babies to be placed in a laboratory setting that differs from the control group’s laboratory life in only one way: they eat vegetables and fruit and the control group does not.
And although we do perform this kind of lifelong laboratory experimentation on animals, when it comes to food experimentation we have a serious problem in extrapolating outcomes in mice, rats or monkeys as being at all relevant to humans. We are the only creatures on the planet optimized to eat cooked food. That is a very important distinction. While we might be able to identify that rat stress and human stress are comparable, if we feed rats cooked foods they lack the 800,000-odd years’ worth of evolutionary development needed to respond to that food the way humans do.
The best way to figure out whether, say, veganism confers any protective value on otherwise healthy human beings would be to run a randomized trial. Take several hundred people and randomly place each one in one of three groups: vegan, vegetarian and control. By carefully screening participants upon entry into the study for pre-existing conditions and disease, we could feel fairly confident in having them run out their lives, and we would be able to determine whether veganism really does provide any statistically significant benefit in morbidity and mortality outcomes.
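The randomization step described above can be sketched in a few lines of Python. The participant labels and group names here are hypothetical, not drawn from any real trial; the point is that assignment is made by chance, independent of any participant characteristic.

```python
import random

def randomize(participants, groups=("vegan", "vegetarian", "control"), seed=0):
    """Shuffle participants and deal them round-robin into the study arms,
    so group membership is decided by chance alone."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    arms = {group: [] for group in groups}
    for i, person in enumerate(shuffled):
        arms[groups[i % len(groups)]].append(person)
    return arms

# 300 hypothetical participants dealt into three arms of 100 each.
arms = randomize([f"participant_{n:03d}" for n in range(300)])
print({group: len(members) for group, members in arms.items()})
```

Because any confounding trait (wealth, genetics, baseline health) is scattered evenly across the arms by the shuffle, differences in outcomes can be attributed to the diet itself rather than to who happened to choose it.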
Problem is, you cannot run a study like that. Veganism is a diet that is particularly difficult to adhere to (the vast majority are vegan for 1-3 years, and 94% are ovo-lacto vegetarian after that point). Therefore most participants would not finish out the study in such a way that we would have any data to crunch into some kind of statistical significance. And never mind that folks wouldn’t be too keen to live in a laboratory for decades either!
Of course, studying the value that a diet has upon an existing diseased community is much easier. First of all, we don't need to run longitudinal studies (the benefits in symptom reversal will likely be noticeable within weeks) and we have a motivated community of participants.
In the end randomized controlled trials (not-blinded, single-blinded and double-blinded) are applied on humans with existing disease states and otherwise healthy controls for short periods of time as a way to at least determine if dietary changes can ameliorate a disease state or not.
What will you gain when you lose?
The subheading you see here is lifted from one of the most recent Special K cereal promotions in North America. The campaign is a perfect example of leveraging provisional living: “I will be ‘xyz’ when the scale reads ‘abc.’”
Not only that, it’s a reinforcement of orthorexic belief systems on a massive scale. Why do I bring this up in the middle of a discussion of ‘healthy’ diets? Because while I have been addressing ‘healthy’ diets framed as the pursuit of living a long and healthy life, the pursuit is actually a ball of anxious wool that intertwines existential angst, social acceptance and fear of isolation with top-of-mind concepts of health and longevity.
The unnatural coupling of the medical industrial complex with the time immemorial human condition of knowing you are a creature bounded by space and time has generated the monster in our society that is orthorexia.
Attempting to engage in the circular thought process of what you gain when you lose is just chasing the pink dragon (a reference to the TV series South Park and its Heroin Hero episode). The original phrase, chasing the dragon, referred to opium use, but it is often applied to drug use generally. That first high is perfect. And from that point forward you are forever chasing that perfection with more drugs and commensurately more misery until there is nothing left.
I will get much more philosophical later on in this post when I discuss the forest of orthorexia rather than the individual dietary trees, but for now let’s return to the dietary trees for a bit longer.
If you take one thing, and only one thing, away from this entire blog post let it be this:
All diets and variations of diets that are identified as “healthy” are going to fall into one of these three categories:
- Actually so restrictive in food groups (or amount) as to be identifiably (with clinical data) unhealthy, or deadly.
- A diet that has been identified as weakly correlating in epidemiological studies with health and longevity where too many unaccounted for factors in the results render the data essentially irrelevant (seeing faces in the clouds).
- A diet that has been experimentally proven to reverse symptoms associated with an existing disease state that has subsequently been wrongly applied as though it also has preventative value.
Confusing Cause and Effect
Some of the following information will be familiar to those of you who were active on the 2012 forums as I am quoting myself from some of those threads:
Let’s look at my own scenario as a good example of the pitfalls of adopting a diet that removes food groups because it confers symptom alleviation for those with a specific disease state.
Based on genetics and the country in which I live, chances are I should have a life expectancy somewhere in my late 80s. However, I developed celiac disease (which runs on the maternal side of the family). My celiac disease went undiagnosed for likely 7 years before it was identified.
At that point if I had continued to consume gluten, knowing I had celiac disease, for the next few decades (meaning I ignore my diagnosis), then I can expect to drop my life expectancy by perhaps 12 years or so (because the damage that gluten does to my body left unchecked can lead to a variety of serious problems).
By following a gluten-replacement diet I have (relatively speaking) pushed my life expectancy back up. But, and this is important to note, it is likely not back to the late-80s figure I might have expected had I never developed celiac disease in the first place.
So when we say that a gluten-replacement diet is beneficial to me, we are talking in relative terms. Far more beneficial (from both a morbidity and mortality point of view) would have been to have not developed celiac disease in the first place!
Although I mentioned the issue of compromising immunity if you adopt restrictive diets in the absence of existing disease states in Tummy Troubles in relation to the drop in Bifidobacteria associated with adopting the FODMAPs diet, the same drop is seen when healthy subjects apply a gluten-replacement diet too: “the [gluten-free diet] led to reductions in beneficial gut bacteria populations and the ability of faecal samples to stimulate the host's immunity [emphasis mine].”[G De Palma et al., 2008]
If you are eating a gluten-free diet and you don’t have celiac disease; if you are eating a vegan diet and don’t have existing heart disease or rheumatoid arthritis; or if you are eating a low-protein diet and don’t have kidney disease, then you are not preventing illness or death— you are hastening it.
There is a net benefit to restrictive diets (where food groups are off-limits) only when the existing chronic disease state is severe enough that it can so take the boots to your quality of life and longevity that you are going to add some years and quality by keeping foods off your list. But if you are healthy, then you are actually removing quality and years of life by removing food groups from your diet.
Restrictive Eating Disorder ≠ Health and Longevity
What if you have the misfortune to have rheumatoid arthritis and a restrictive eating disorder? If you have competing multiple chronic conditions, then it’s a matter of relative mortality and quality of life issues for which you and your medical team must determine what is going to provide you with the best quality of life for the longest period of time.
However, the vast majority of people eating paleo, vegan, 80/20, low-carb, low-fat, raw, FODMAPS, DASH, ketogenic, gluten-free, grapefruit, watermelon, cabbage soup, 3-hour, Atkins, blood-type, fat-flush, low-glycemic, macrobiotic, master cleanse, Shangri-La, skinny bitch (seriously?), South Beach, volumetrics, the Zone, Jenny Craig, Nutrisystem and Weight Watchers diets are applying them for one reason only: maintaining a weight that is sub-optimal relative to the body’s inherited set point.
Do NOT go Googling the above list of diets. Just don’t. Well unless you want to put the diet in question together with a word like “scam”, or “risk”, or “danger” in the search terms.
And for those who have an activated restrictive eating disorder then the above diets very quickly become safety and avoidance behaviors whereby contemplating eating foods that sit outside the diet will generate severe anxiety and fear.
None of the above listed diets protect health or longevity for otherwise healthy individuals. And only a very few of the above listed diets provide symptom alleviation for unhealthy individuals with existing chronic illness.
And let’s be clear, the chronic illnesses in question are very, very limited as well. If you have kidney disease, coronary heart disease, celiac disease and/or rheumatoid arthritis, then there are clinical trial data to confirm removing/replacing food groups alleviates symptoms.
And let’s also be clear that a restrictive eating disorder is a deadly chronic condition to the point where you and your medical team may not be able to consider removal of foods from your diet when managing your secondary chronic condition (such as rheumatoid arthritis) because it drives progressive restriction and hastens overall disability and early death as a result.
Doctors tend to suggest dietary change and exercise for pretty much any chronic condition out there because, well, other options for appreciable symptom alleviation, remission, or reversal are likely limited to non-existent. But that does not mean that the recommendation is always being offered up from an evidence-based perspective.
Why do medical professionals recommend dietary changes and increased activity for patients with diagnosed chronic conditions in the absence of clinical trial data that confirm it will have any benefit? Because: placebo and locus of control.
Ragen Chastain, in one of her numerous excellent blog posts, addressed the logical fallacy of “I know someone who lost weight and they got healthier.”
When people are diagnosed with any chronic condition of any kind, the locus of control has been ripped away from them. We believe, wrongly, that our health is entirely within our control (an internal locus of control).
When we get a diagnosis of cancer (yes, it’s a chronic condition), diabetes, some rare autoimmune disorder, multiple sclerosis, etc. we are hit with the rude awakening that our health status is not exclusively ours to determine or sustain.
When a doctor tells a newly diagnosed patient to eat well and exercise, she is reinforcing something that is very rarely removed from our internal locus of control: what we consume and how we expend energy. It offers a stabilizing counterforce to the destabilizing realization that you are sick when you have very likely done nothing to “deserve” being sick. Feeling as though things are in your control is very powerfully health protective in its own right.
The most powerful placebo of all time is the unshakeable belief that something over which we likely have little to no control is actually in our control.
It is not that doctors are in on some conspiracy to tell you to eat well and exercise, knowing full well that it will improve your overall health status just because you believe it will; they offer the recommendation most commonly because they believe it too.
But beliefs can be wrong. The very dark side of our societal belief in our health status being exclusively in our control is that if your symptoms do not improve with your dutiful attention to diet and exercise, then what?
For many facing the unpleasant reality that the symptoms are not improving despite dedicated attention to diet and exercise, they simply assume that they have misapplied or overlooked some important detail in diet and exercise. They try a different diet; they change their activities; they eliminate more foods; and they become even more excruciatingly aware of the symptoms in their attempts to identify what combination provides any worsening or improvement at all.
And most never come out of that spiral.
Even more tragically, many with restrictive eating disorders develop secondary chronic conditions as a direct result of restricting food intake, for which they receive a diagnosis and the go-to advice to watch the diet and exercise. And not a single medical practitioner identifies that the entire progression could be reversed were the patient attending to re-feeding and rest (rather than diet and exercise).
I would be writing for the next three decades straight if I attempted to list every chronic condition out there (common to rare) and then dutifully unpacked the clinical trial data to determine whether dietary and activity changes realize any appreciable and relevant improvements in symptom severity and frequency.
So, if you do face a restrictive eating disorder and a secondary diagnosed chronic condition then it is wise to take the diet and exercise recommendation you may receive from your medical practitioner as being suspect and possibly merely an expression to buoy up your own sense of self-efficacy, rather than originating from evidence-based medicine.
Should We Be Masters of Our Health Destinies?
If you review information on mainstream websites regarding breast cancer and race, you will note that African-American women are more likely to die from breast cancer than their white counterparts. This fact is presented as, obviously, a bad thing. On any public information site on the topic, you will note that they indicate that African-American women are 41% more likely to die from breast cancer than their white counterparts [Example Here].
But are the comparators equal? And if they are not, then is that frightening percentage an absolute measurement that an African-American woman is more likely to die from breast cancer than her white counterpart in the U.S.?
White women have a higher incidence of breast cancer than their African-American counterparts. That’s your first hint that things may not be as advertised here.
A greater percentage of white women undergo regular mammogram screening than African-American women. 72% of white women undergo regular mammogram screening, compared to 63 to 68% of African-American, Hispanic, Asian and Native American women [N Chan, 2006].
22.1 white women die of breast cancer per 100,000, whereas 30.8 African-American women die of breast cancer per 100,000 [SEER Fact Sheet, National Cancer Institute, Cancer of the Breast, 2011].
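It is worth doing the arithmetic behind those cited rates explicitly. Note that the 2011 SEER figures above imply a relative gap of about 39%, in the same neighborhood as the commonly advertised 41% (which draws on other reporting years).

```python
# Breast cancer mortality rates as cited above (SEER, 2011):
white_rate = 22.1  # deaths per 100,000 white women
black_rate = 30.8  # deaths per 100,000 African-American women

absolute_gap = black_rate - white_rate    # deaths per 100,000
relative_gap = absolute_gap / white_rate  # fractional difference

print(f"{absolute_gap:.1f} more deaths per 100,000")  # 8.7
print(f"{relative_gap:.0%} higher mortality rate")    # 39%
```

The same underlying numbers can be framed as "roughly 9 extra deaths per 100,000" or as "roughly 40% more likely to die"; the relative framing sounds far more alarming, which is why it is the one you see on public information sites.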
But the question is this: are there roughly 9 fewer deaths of white women per 100,000 (when compared to their African-American counterparts) because white women have a health locus of control that encourages them to get regular mammogram screening? Or are there roughly 9 white women per 100,000 who were unnecessarily screened, diagnosed and treated for breast cancer, or (more worryingly) for something cancer-ish like ductal carcinoma in situ (DCIS), who were never going to need treatment because the cancer was never going to be life-threatening within their natural life spans anyway?
If it is the latter case and we could somehow remove all those white women who were unnecessarily screened and treated, then we might discover that the mortality rates are exactly the same in both groups.
We cannot know the answer to that because those screening rates are different and therefore misleading, but there has been a huge body of research on the health locus of control of African-American women when compared to their white counterparts, and there are some interesting variations.
“The African-American women were significantly more likely to believe in chance, or to depend on powerful others for their health. Perceived susceptibility to cancer, doubts about the value of early diagnosis, and beliefs about the seriousness of breast cancer all were significantly associated with powerful other scores among African-American women. There was no relation between health beliefs and years of education for African-American women, but for white women, those with the least education were more likely to believe that death was inevitable with a cancer diagnosis”[J Barroso et al., 2000]
Now we could read that quote above by clinging to our dominant cultural attitude that health locus of control must be internal in order for an individual to have the best chance at health and long life, or we could re-evaluate our own attitude instead.
To translate the above excerpt, the researchers identified that African-American women are more likely to believe in both chance and an external locus of control (such as God) when it came to their health and disease outcomes.
Sometimes this approach is referred to as religious fatalism. Yet in both African-Americans and the elderly, religious fatalism was seen as more likely to be in response to chronic conditions and not a predictor of adopting unhealthy behaviors [MD Franklin et al., 2007].
“Results from the multivariate model [examining African-American women dealing with chronic pain] showed that age, depression, physical functioning, and locus of control explained unique variance in pain intensity (44%), suggesting that younger age, reporting more depressive symptoms, limited functional capacity, the belief that one has control over one's health, and the belief that one's health is not controlled by others were significant predictors of greater pain intensity among this sample.”[TA Baker et al., 2008]
For now, tuck away the possibility that, when it comes to chronic conditions, the cultural attitudes we see in the African-American female community in the United States may be worth investigating from a “what if I’m wrong?” stance in our own internal health locus of control culture.
Oh, But the Ethics!
The only forum topic almost as dependably capable of sending everyone into the argumentative weeds as exercise is the ethics of food consumption.
Here is what I had to say a couple of years ago on the topic of choosing veganism for ethical reasons:
“Ethics are difficult to adhere to in a modern world. If someone chooses to adhere to a vegan diet because she presumes that this supports the ethical treatment of animals, then she is also forced to overlook the horrific number of field mice chewed up in combine harvesters for the grain-based diet she consumes. The destruction of wildlife habitat (meaning the death of animals by starvation on a large scale) and the fertilizer run-offs that create aquatic dead zones (like the Gulf of Mexico) are somewhat more severe when the land is cleared for grain production than when it is cleared for cattle grazing. However the production of greenhouse gases is somewhat more severe from land dedicated to cattle grazing than that dedicated to grain production.
It is a very slippery slope to determine that one person's decision to do anything for "ethical reasons" is any more or less ethical than is the next person's decision to do the exact opposite.
From the perspective of our impact as humans on the planet, about the only decision that could arguably be defined as perhaps having a net benefit is to choose not to procreate. We are all hypocritical when it comes to our efforts to make ethical choices and I count myself in that group.”
Fundamentally, the ethics of food choice are irrelevant when it comes to the science of dietary restriction and its health implications on the individual applying such restriction.
It is most certainly your prerogative to adhere to any belief system you wish. And you may do so such that it harms you personally but that you still identify the course of action as being for the greater good. In other words, if you choose to harm yourself with a vegan diet because you believe it allows for the greater benefit of other living creatures on the planet, then namasté.
If you wish to recover from a restrictive eating disorder and maintain belief systems that categorize foods into “good” and “bad”, “ethical” and “unethical”, “healthy” and “unhealthy”, then you likely won’t find a way to balance out the science-based approach of the MinnieMaud recovery guidelines with your need to maintain such belief systems.
Are you wrong to adhere to such categorizations? It is not for me or anyone else to answer that. Your call.
But the rest of this blog post is to those of you who might want to entertain the possibility of being wrong.
Ministering to the Sick and Dying
Imagine the shock a physician must face in our developed nations if she has a patient who actually dies on her watch from a multiple drug-resistant bacterial infection. Of course, there are plenty who warn that this is going to become a much more common occurrence, given that we feed 80% of the antibiotics produced worldwide to our livestock [FDA, 2011]— generating progressive systemic resistance.
For millennia, physicians were shamans— the intercessor between yourself as a time- and space-bounded being and your transcendence beyond space and time (in whichever way your culture defined that transition).
But with several rounds of civilizations rising and falling, physicians became more skilled at trauma. When surgery could be performed in a sterile way, then all manner of deformities and injuries could be manually repaired and the patient could survive the ordeal as well.
Yet still, within living memory, the role of a physician was very commonly to minister to the sick and dying— to comfort them, to alleviate what symptoms could be alleviated, to be present, and to try to optimize the chance that the patient’s own immune system might win out. Infection was an anxious wait-and-see: an external locus of control for the physicians of the time.
While numerous ancient civilizations inadvertently used antibiotics, the discovery of how antibiotics worked and how they could be applied, along with the parallel identification that inoculation could prevent disease, meant that physicians could now prevent or cure disease.
And yet we are still time and space-bounded creatures who get sick and die. So spectacular was the paradigm of prevention and cure that we keep applying the same framework to chronic conditions.
Had we never built the huge medical and health industrial complex that applied preventions and cures to all those deadly conditions that plagued humanity for millennia, would we in developed nations have felt satisfied enough to turn all our attention to ensuring that others around the world enjoyed the same life expectancies we do?
Clearly that is a rather pointless “what if” question to ask, rather like a mother’s admonishment to her children to eat up because there are starving children in Africa. If you go down the path of trying to identify whose pain and suffering is more or less relevant, you become mired in the unanswerable.
We do not have to feel fabulously first-world lucky when we are suffering, in pain and see no way out. However, we can identify that the framework by which our society addressed trauma and pathogen-borne acute disease in the past is now providing vanishingly small benefit when applied to the reality of the chronic conditions that remain in our midst today.
While we have a good body of scientific evidence that social isolation kills, we struggle to identify why social isolation occurs or how it might generate such poor health outcomes.
One interesting study was able to identify that neither attractiveness nor health behavior differs as a function of social isolation. That means social isolation did not occur as a result of perceived attractiveness to others, nor did it result from poor health behaviors [JT Cacioppo, LC Hawkley, 2003]. Interestingly, these researchers identified that while socially isolated subjects did not report more stressful events than controls, they did rate those events as more intensely stressful. They were also more likely to report coping passively with stressors, and this correlated with slower wound healing, poorer sleep, and greater vascular resistance.
The body of research on social isolation is phenomenally WEIRD. As a refresher, WEIRD is a term coined by Joseph Henrich, Steven Heine and Ara Norenzayan (all University of British Columbia scholars). The term identifies that our understanding of the human mind is predicated on the study of predominantly white and educated subjects from industrialized, rich and democratic countries— basically university students in the West [The Weirdest People in the World: How representative are experimental findings from American university students? What do we really know about human psychology?].
The context of social isolation research is focused on self and not other. Within that context it is assumed that a person with an anxiety disorder becomes socially isolated because he or she is too anxious to engage with others. But I don’t believe it is quite that unilateral in scope.
Does your anxiety alienate others? Yes it could.
Anxiety creates a level of self-absorption, heightened arousal and often irritability that is off-putting to others.
I will quote from my blog post Rebounding to Calm Part I where I am discussing the application of safety behaviors that anxious people will apply to try to alleviate anxiety in social situations:
“Having memorized comments and pre-prepared topics often involves a level of self-monitoring that makes the patient appear either preoccupied or disconnected. Unfortunately, other people often interpret these safety behaviors as a sign that the patient does not like them. This precipitates a cool response from those people that only reinforces the patient’s sense that she is being rejected, when in fact she started the entire cascade with her safety behaviors generating warped social cues.”
The same is true with safety behaviors surrounding food choice.
I Have a Friend…
No, I really do have a friend who is vegetarian for ethical reasons. But the expression of her ethics can be alienating to others at times.
I have another acquaintance, and I have no idea why she is vegetarian; she just is. She volunteers no information, although I’m sure I could ask and she would explain her reasons. She was also raised a hunter and is an accomplished markswoman.
This acquaintance seamlessly expresses her preference as a footnote, equivalent to saying something like: “Yes, I have one brother and two sisters.” My friend’s expression, by contrast, leans towards making a point.
No one wants to be made to feel inferior, and the expression of anxiety can inadvertently make others feel inferior. If you are strident in protecting your safety behaviors, as you necessarily will feel you have to be because they make you feel safe, then you, by extension, may communicate to others who do not apply those behaviors that they lack moral character.
Being immersed in this kind of anxiety will mean you cannot connect with others. You cannot empathize and you cannot moderate your conversation and input to put others at ease. Anxiety makes those of us with the condition brittle and inflexible.
It is the dirty secret of anxiety: you might very well be physically attractive, intellectually sound and able to delay gratification such that you have appropriate health-protective behaviors, yet you fail the basic primate likeability test— you cannot adjust midstream to increase everyone else’s ease.
As such, social isolation may actually develop as an action/reaction alienation spiral. The application of safety behaviors to modulate anxiety generates outward inflexibility, sometimes stridency and often disconcerting social cues. That brittleness and intensity can make the recipient feel rejected and judged. The ensuing cool response and distancing behaviors of the other reinforce rejection back to the originator. And so it goes.
This alienation of others happens inadvertently. No one with anxiety goes out to be off-putting.
Doubt and Certainty
“Our capacity to tolerate error depends on
our capacity to tolerate emotion”
[Irna Gadd, as quoted by Kathryn Schulz, Being Wrong, p 199]
And we all experience times when we are simply a bit too brittle to tolerate being wrong because being wrong is an emotion akin to grieving.
We also each have our own hierarchy that identifies the emotional intensity of one type of error compared to another. Being wrong about where we put down our keys is less fundamentally painful to admit than identifying that we are wrong about foundational concepts and beliefs through which we interpret the world and live our lives.
Schulz frames this scale succinctly as follows: “In this respect, the experience of pure wrongness, although rare, is the telling one: our resistance to error is, in no small part, a resistance to being left alone with too few certainties and too many emotions.”
In this light, it might be possible to view orthorexia as a condition of behaviors that are put in place to ritualize resistance to too few certainties and too many emotions.
There is a reason that the root word of orthorexia is “ortho”: it is Greek for right, true, correct, proper, or straight. If you have figured out the right, correct or true way to eat, then you have a belief system that is beyond doubt.
And I, for one, will not shake that belief system. This entire post is only of any value to someone who senses that on the very edges of that right-eating belief system there resides absolute wrongness— a life shattering kind of wrongness.
If you think you might be wrong, then hopefully this post gives you some tools for deconstructing your current right-eating belief system.
You can apply the practice of scientific inquiry and begin by investigating whether your right-eating diet has experimental results at the pinnacle of the hierarchy of data to prove it really has the value you believe it might have.
You can examine whether you might be using an internal health locus of control that has caught you up in chasing the pink dragon of “maybe more of this, less of that” rather than shifting the locus to an external view that appears to have more value when people face chronic illness.
Additionally, you can engage those around you in honest discussions as to whether your food orthodoxy leaves them feeling alienated and rejected. Perhaps you hold a belief that you are disliked or unlikeable when, in fact, you are wrong: it is that people feel you do not like them.
You can mull over all the ways in which the medical industrial complex might be influencing everything that you see, read, and hear when it comes to the war on chronic diseases and whether that background noise has completely distorted your sense of what really constitutes responsible health care (by you and on your behalf).
And finally, you can sift through whether your intolerance of error is an intolerance of emotion and you can engage appropriate counseling services to develop increased resilience in those areas too.
“That is why error, even though it sometimes feels like despair, is actually much closer in spirit to hope. We get things wrong because we have an enduring confidence in our own minds; and we face up to that wrongness in the faith that, having learned something, we will get it right the next time.”
[Kathryn Schulz, Being Wrong, pp 238-239]