Why is there an epidemic of mosquito- and tick-borne diseases? Because we are all that’s left for them to feed on

There is an epidemic of mosquito- and tick-borne diseases spreading among humans. Yellow fever is returning, West Nile fever has increased, dengue fever increased 30-fold between 1960 and 2010, and Zika turned into a brain-damaging disease seemingly out of nowhere. There’s also malaria, with which most people are already familiar.

Most of these diseases occur in tropical countries, but northern latitudes are witnessing their own problem in the form of an explosion of tick-borne diseases. Sixteen are known to us, of which four have only been discovered since 2013. One in particular is spreading rapidly: Crimean-Congo hemorrhagic fever, which is advancing through South-Eastern Europe and kills anywhere between 10% and 40% of the people it infects. Lyme disease is also spreading very rapidly across the developed world, making it dangerous for humans to venture out into nature.

What causes this problem? There are a few important factors. There is more international travel, at a faster pace, than ever before in human history. The natural barriers between human communities that would normally impede the spread of disease no longer exist. There are also simply more humans out there. Diseases that would once have dropped below the population density threshold they need to survive now manage to find sufficient human hosts to proliferate. One study found a minimum population density of roughly five people per square kilometer for malaria to gain a foothold in a particular part of the Amazon.

Most importantly, however, for mosquitoes and ticks to be good transmitters of viruses and other pathogens, they have to feed on a limited number of species. Low biodiversity makes it easy for a virus or bacterium to jump from a tick or mosquito to its victim, and then from that victim to new ticks or mosquitoes. If the animal that’s bitten is too different from the animals the pathogen normally replicates in, the tick can continue its life cycle by sucking up blood, but the pathogen can’t proliferate in the victim. As a result, the next generation of mosquitoes and ticks feeding on that animal remains free of the disease. Animals that put a stop to the life cycle of a pathogen in this way are known as “dead-end hosts”.
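The logic here is essentially arithmetic, so a minimal sketch may help. The host shares and competence values below are invented for illustration and are not taken from any study cited in this post:

```python
# A minimal sketch of the "dilution effect": the chance that a feeding tick
# picks up a pathogen depends on which host species it bites and on how
# competent each species is as a reservoir. All numbers are invented.

def tick_infection_chance(hosts):
    """hosts: list of (share_of_bites, reservoir_competence) pairs."""
    return sum(share * competence for share, competence in hosts)

# Low-diversity community: most bites land on a highly competent reservoir
# (for Lyme disease, think of the white-footed mouse).
low_diversity = [(0.8, 0.9), (0.2, 0.1)]

# High-diversity community: the competent reservoir is still present, but
# bites are diluted across poor hosts and dead-end hosts such as lizards.
high_diversity = [(0.3, 0.9), (0.3, 0.1), (0.2, 0.05), (0.2, 0.0)]

print(f"{tick_infection_chance(low_diversity):.2f}")   # 0.74
print(f"{tick_infection_chance(high_diversity):.2f}")  # 0.31
```

Adding hosts without removing the competent reservoir still cuts the share of infected ticks dramatically, which is the point the next paragraphs make with mice and lizards.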

It’s important to have many different types of dead-end hosts if we wish to rein in the spread of vector-borne diseases. Large animals, like deer and carnivores, tend to be relatively poor hosts for these diseases. For Lyme disease, the best host is the white-footed mouse. It’s generally found that places with many different types of rodents have far fewer cases of Lyme disease in humans than places where species diversity is low.

The king of all dead-end hosts, however, appears to be the humble lizard. One reason for this is thought to be that many lizards are capable of surviving winter with a below-zero body temperature. During this period, the body of the lizard is cleansed of pathogens like the Lyme disease bacterium. Accordingly, studies tend to find that ticks found on lizards are much less likely to carry Lyme disease than ticks found elsewhere: just 1.4% of ticks on lizards in the Netherlands carry Lyme disease, compared to 24% of ticks found on heathland and 19% of ticks found in forests. A similar effect is found in the United States.

There are different perspectives on biodiversity. Deep ecologists would say that non-human life has a right to survive undisturbed, regardless of whatever benefits these organisms may have for humans. Unfortunately, such arguments tend not to be particularly persuasive to politicians like Paul LePage, who argues that “Job creation and investment opportunities are being lost because we do not have a fair balance between our economic interests and the need to protect the environment”.

Because the survival of non-human species needs to deliver economic benefits before most policymakers will show interest, more ecologists these days study the impact of biodiversity on disease burdens in humans. Perhaps the realization that a world completely dominated by humans and a few species that happen to escape our destructive impact is also a world plagued by a growing number of vector-borne diseases could help a few species escape the threat of extinction that looms over them.


An unusual Seneca cliff in Africa

It’s quite well known that the growth in human height in most developed countries has stagnated. This is easy to explain and isn’t particularly worrisome. Human height is a product of underlying genetic potential, as influenced by the environment. As the diet of a nation improves, its people’s adult height should increase, up to a limit: once all nutrient shortages are addressed, few further gains in height are to be expected.

In recent years, however, it has been observed that in many countries human height is declining again, despite ongoing economic development. Perhaps most worrisome is the rapid pace at which it is declining in these nations. In some countries, decades of progress have been erased in a few years.

Consider the case of Nigeria below:

[Chart: average male height in Nigeria over time]

As can be seen here, by 1996, average male height is back to a level last seen in 1941.

Ethiopia has similarly been affected by a downturn much steeper than the initial rise in height:

[Chart: average male height in Ethiopia over time]

Egypt too has seen an enormous decline in height, a typical case of Ugo Bardi’s Seneca cliff:

[Chart: average male height in Egypt over time]

Not all poor nations are affected: China, Brazil and many others are still recording gains in height. It is, however, a problem affecting nations that are collectively home to hundreds of millions of people.

What could explain this mysterious problem? Economists who look at Nigeria or many of the other affected African nations would see countries that were plagued by economic stagnation for decades but now seem to be rapidly expanding their economies.

The authors of the study suggested that changing dietary habits may play a role, as well as collapsing health care systems. This seems the most likely possibility to me. In particular, it’s worth noting that people in much of Africa depend upon bushmeat as their main source of protein.

As bushmeat is depleted, people are forced to fill their diet with other foods. One source of nutrition that’s very popular in Africa is cassava, which naturally contains cyanogenic compounds. Chronic cyanide exposure is known to cause a decline in growth, as well as cognitive problems. None of this, however, could possibly explain what’s happening in Egypt: Egypt has no bushmeat, nor do Egyptians eat cassava.

It’s somewhat hard to draw any strong conclusions about what could have led to this sudden and rapid reversal in human height. One factor that may be relevant is the spread of smoking in much of the third world, as secondhand smoke exposure is known to reduce the growth rate of young children. It’s also possible that a decline in breastfeeding rates played a role.

Another factor that may be important is a rise in vitamin D deficiency. If rural women and children spent more of their time active outdoors, increasing urbanization may have led to an increase in the vitamin D deficiency that stunts growth. Vitamin D deficiency is known to be epidemic throughout the third world.

Finally, the main and most important factor to consider would have to be a drastic reduction in the quality of people’s diet. If people’s diet has become less diverse, they would be expected to suffer a variety of nutrient deficiencies that impact their growth. Fresh fruit and vegetables are difficult to store and thus rather expensive, so urbanization may leave people dependent on a much worse diet.

How modern medicine has set us up for a looming cancer epidemic

Last year I wrote an article pointing out that viruses are not by definition malignant. Many of the viral pathogens to which we are exposed should instead be seen as symbiotic organisms that perform important functions for our health. Among the effects they cause is a long-lasting protection against cancer.

Unfortunately, it is easier for us to detect the acute effects of a viral infection than to detect the long-term role that viral infection plays in our health. As a result, modern medicine has led us to make a number of decisions that are ultimately detrimental to our health.

I recently encountered a study that provides a long list of cases where natural childhood infection with diseases we now vaccinate against was associated with a reduced risk of cancer. Even viruses like influenza are mentioned in the literature as playing an important role in “teaching” the body how to cope with future cells that display abnormal proliferation.

The whole relevant excerpt is posted below:

The idea of febrile infections conferring protection against the development of tumors began to develop in the late 19th and early 20th century. Initially, anecdotal evidence based on physicians’ observations and examination of medical histories of patients suggested beneficial effects of infections on cancer risk.

More formalized data collection followed and in 1912 a study demonstrated that United States and Canadian Native American populations that had six times higher mortality rate from infectious disease had a lower rate of mortality from cancer (74).

In 1916, another study examined mortality rates in New York, Boston, Philadelphia, and New Orleans from 1864-1888 and 1889-1913. Over these two time periods, deaths from infectious disease decreased while simultaneously cancer death rates increased by over 55%.

In more contemporary times, a study examined the relationship between cancer and infectious disease rates in Italy (75). The authors proposed that there were four factors to consider when investigating possible causality in this relationship: 1. Consistency and strong associations among studies, 2. Temporality of the association, 3. True plausibility when considering the field of research, and 4. A dose response relationship between agent and disease.

According to them, temporality was the most important aspect in determining association. With this as a guide, they attempted to resolve the conflict between evidence that infections cause cancer versus evidence that infections can prevent cancer.

Establishing a stricter temporal relationship between observations of a decrease in infections in Italy and observation of an increased cancer rate confirmed that in the first half of the 20th century the cancer increase indeed followed the decrease in infectious disease rates (75).

As evidence began to mount in favor of infections having a beneficial effect on preventing cancer, so did anecdotal evidence that concurrent febrile infection in cancer patients led to better cancer control. Dr. William Coley was the first to administer infectious agents to a substantial number of cancer patients.

Coley administered a mixture of heat-killed Streptococcus pyogenes and Serratia marcescens to late-stage sarcoma patients. This mixture, known as Coley’s Toxins, led to cancer remissions in many sarcoma patients (76).

Even today, infection of the bladder with Mycobacterium bovis Bacillus Calmette-Guerin (BCG) is the treatment of choice for patients with non-muscle invasive bladder cancer (77). These bacterial infectious agents are thought to act as adjuvants enhancing ongoing anti-tumor immune responses.

While this may be the case, infections might play other roles in the development of anti-tumor immune responses. These novel roles of infections will be investigated later in this dissertation. In exploring relationships between childhood infections and cancer, the majority of studies focused on influenza, measles, mumps, pertussis/whooping cough, chicken pox, scarlet fever, rubella, and diphtheria.

A study done in 1966 was designed to examine the relationship between exposure to X-ray radiation and hormonal therapy and increasing mortality rates from ovarian cancer. 97 patients with ovarian cancer and 97 controls with benign ovarian tumors were recruited and a thorough medical history was obtained.

In the final analysis, X-ray radiation and hormone therapy were not found to have an impact on ovarian cancer mortality rates. Unexpectedly, however, individuals who had reported a past history of mumps parotitis had a significantly reduced risk of ovarian cancer (p=0.0007) (78).

This study was the first to suggest that mumps infection could lead to lower incidence of ovarian cancer. The mechanism, however, remained unknown until Cramer et al proposed an immune mechanism in 2010 (62).

In 1977, another study addressed possible causes for the rising incidence of ovarian cancer. 300 women with laparotomy-confirmed ovarian cancer were recruited in 17 medical centers.

They were administered a questionnaire that included inquiries on X-ray exposure, pregnancies, smoking, chronic illness, history of acute infections, contraception use and prior surgeries, among others. The study included two control groups: 1. Gynecological patients in the hospital for reasons other than suspected ovarian cysts or tumors and 2. A group of women in the community provided from a list generated by general practitioners. These two control groups were provided with the same questionnaire as the case group.

When the data was compiled, many women diagnosed with ovarian cancer reported fewer past bouts of mumps, measles, chicken pox, and rubella. The study also displayed a statistically significantly reduced relative risk of developing ovarian cancer in the second control group among women who had a history of measles (Relative Risk (RR)=0.47), mumps (RR=0.61), rubella (RR=0.62), and chicken pox (RR=0.66) (79).

A recent retrospective review confirmed the outcome of reduced risk for ovarian cancer correlating with those four infections (74). Kölmel et al. performed a case-control study in 1992 in support of their hypothesis that childhood and/or adulthood febrile infections influenced the risk of melanoma.

The study was carried out at the University of Göttingen on 139 newly diagnosed melanoma patients whose primary tumor was removed between January 1988 and September 1991. The study also included 271 controls recruited from a pool of Ophthalmology and Dermatology clinic patients whose diseases were not related to malignant melanoma.

Both cases and controls were age- and gender-matched. These groups were administered questionnaires to collect information on the frequency and temperature of febrile infections experienced, as well as questions on childhood and adulthood infections and on common infections within the 5 years of melanoma diagnosis.

Group one in the study consisted of individuals who had one or more of the following: measles, mumps, rubella, chicken pox, scarlet fever, diphtheria, whooping cough, or a tonsillectomy performed prior to the age of 13 (assumed to have had tonsillitis).

Group two were individuals who had infections in adulthood associated with fever that included hepatitis, tuberculosis, erysipelas, chronic infectious diseases, febrile abscess, furunculosis, wound infections, tropical diseases, and other fever producing diseases.

Common diseases accompanied by a fever within the five years of melanoma diagnosis (group three) included gastroenteritis, influenza/common cold, pneumonia, herpes simplex virus 1, and “other trivial diseases” associated with fever. Individuals in Group one were not found to have statistically significantly reduced risk of melanoma. However, Group two with the history of chronic infectious diseases (adjusted Odds Ratio (OR)=0.32) and febrile abscesses/furunculosis/wound infections (adjusted OR=0.21), were found to have a significantly reduced risk for melanoma.

Individuals in group three, particularly with a history of influenza/common cold (adjusted OR=0.32) and HSV1 (adjusted OR=0.45) also had significant risk reduction for melanoma. There was also evidence of the cumulative effects of multiple febrile infections compared to never experiencing a serious febrile infection. The study showed that more fevers experienced in group two (p=0.01) and group three (p=0.0001) translated to a significant risk reduction.

Although the calculated risk for melanoma was not significantly reduced in group one, childhood diseases such as chicken pox, measles and mumps had odds ratios <1, indicating a possible reduction of melanoma risk (80). A repeat of this study on a larger number of cases included 603 melanoma patients from 11 medical centers linked via the European Organization for Research and Treatment of Cancer (EORTC) Melanoma Cooperative Group.

The control group included 627 individuals who were age matched and selected from the same neighborhoods of the melanoma cases. Individuals in this study were grouped according to a history of severe diseases (group one- hepatitis, tuberculosis, Staphylococcus aureus infections, urinary tract infections, sepsis, meningitis, rheumatic fever, cholecystitis, and erysipelas), and less severe infections suffered five years prior to primary tumor removal (group two- influenza, infectious enteritis, bronchitis, pneumonia, and HSV).

Results from this study confirmed the relationship between strong febrile infections accompanied by a body temperature above 38.5°C, and risk reduction for melanoma (81). Association between gliomas and Varicella-Zoster Virus (VZV) was examined in the San Francisco Bay Area Glioma Study. Individuals were questioned on their shingles and chicken pox histories.

These diseases caused by VZV were of particular interest for a link to glioma due to the virus having neurological sequelae. The group found that glioma patients were less likely to have had a history of shingles or chicken pox compared to the control group. In order to have more reliable proof of infection than self-reporting, a repeat of this study was performed on newly diagnosed glioma patients from whom the self-reported history of having chicken pox was obtained and confirmed serologically by positivity for anti-VZV antibodies.

Data obtained on 462 glioma subjects and 443 controls matched for age, sex, and ethnicity demonstrated a reduced risk for developing glioma in individuals with a history of VZV (OR=0.4). The importance of having a more reliable measure than self-reporting, such as serologic markers, for this type of study was justified by the difference in this study between self-reported chicken pox infections and seropositivity for VZV. Individuals who were IgG seropositive for VZV had a reduced risk (OR=0.6) of having a glioma (82).

Albonico et al. studied a cohort of cancer patients with solid epithelial tumors diagnosed by 35 general practitioners in Switzerland, matched according to age, gender, and physician, with a recruitment limit of 20 patients per physician’s office. A total of 410 patients were administered questionnaires that requested information on age, gender, history of febrile childhood infections (in this study defined as measles, mumps, scarlet fever, pertussis, rubella, and chicken pox), and frequency of other infections that resulted in fever >39°C prior to the age of 21.

Additionally, when answering questions specifically on febrile childhood infections, answer options were limited to “yes”, “uncertain yes”, “uncertain no”, and “no” in order to reduce recall/memory bias. Almost 50% were breast cancer patients, and therefore the final analysis was carried out on breast cancers vs. non-breast cancers. Analysis was also divided according to age ≤60 vs. >60. Ultimately the study found a statistically significant reduction in risk of solid malignancy in individuals who had a history of chickenpox (p=0.044) or rubella (p=0.0003) (83).

The risk was further lowered with the increased number of infections. Curiously these results were obtained only in the 50% of individuals who had cancers other than breast cancer. Cancer risk reduction due to the history of febrile diseases did not hold for breast cancer patients, leading the authors to suggest that beneficial protection provided by childhood infections might be body site specific. Additional studies on patients with many different cancers came up with similar results (74, 83, 84).

One case-control study demonstrated that chicken pox and pertussis infections lowered risk for stomach, breast, colorectal, and ovarian cancer (84). Furthermore, an increase in the frequency of cold and flu infections experienced also decreased risk for these cancers. We found only two studies that obtained opposite results (74, 85). Chickenpox (OR=2.09) and mumps (OR=2.61) were shown to increase the risk of cancer.

As a result, Hoffman et al. proclaimed that “no final statement could be made on the association of childhood diseases or fever and cancer.” It is difficult to compare these two studies with the majority of studies described above and come up with a reasonable explanation for the different outcome. However, the majority of studies published before and after the Hoffman et al. findings support the link between a history of infections and decreased cancer risk.

Not all studies examining the relationship between infections and cancer deal with solid malignancies. Increasingly, reports and studies are demonstrating an increased risk for acute lymphoblastic leukemia (ALL) with decreased exposure to childhood infections.

Currently no specific pathogen has been implicated in lowering the risk of ALL development. Evidence continues to point towards a “delayed infection hypothesis”: that ALL risk increases with the delay in exposure to certain infections early in life. The study by Urayama et al. looked at different indicators in addition to infection in order to firmly establish that it is early childhood infections that lower the risk of ALL.

Specifically, the study examined the simultaneous effects of: 1. Birth order, 2. Day-care attendance, and 3. Common childhood infections. Subjects between the ages of 1 and 14 were enrolled in the Northern California Childhood Leukemia Study (NCCLS), conducted from 1995 to 2008.

Approximately 669 ALL subjects (284 non-Hispanic whites and 385 Hispanics) and 977 controls (458 non-Hispanic whites and 519 Hispanics) were selected to address the relationship to socio-demographic differences. ALL-positive status was defined as a diagnosis of CD10+CD19+ ALL between the ages of two and five.

Cases and controls were compared separately for each ethnicity due to inherent differences in daycare utilization and family size. The group found that non-Hispanic white children who attended daycare by six months of age (p=0.046) and who had one or more older siblings (p=0.004) had a lower risk for ALL. Both Hispanic (OR: 0.48 [0.27-0.83]) and non-Hispanic white (OR: 0.39 [0.17-0.91]) children also had decreased odds of having ALL if they had an ear infection before the age of six months (86).

Additional social contact measures in Hispanic children did not demonstrate decreased risk for ALL. Other studies examining the traditional measures of childhood infections (measles, chickenpox, mumps, rubella, and pertussis) also found them to play a protective role against the development of other non-solid malignancies such as AML and CLL in adulthood (87).

Anic et al. utilized similar indirect measures of infection risk with a study on adult glioma risk (88). This study examined glioma risk with birth order, family size, birth weight, season of birth, and breast-feeding history.

The study recruited 889 adult glioma patients 18 years or older who were less than 3 months from glioma resection. All glioma case participants were recruited from neurosurgery and oncology clinics at southeastern universities and cancer centers. About 903 non-blood-related, brain-tumor-free individuals from the same communities were used as controls.

Individuals were subjected to interviewer questionnaires to collect data on cancer risk. Anic et al. found that individuals who had any siblings (OR = 0.64; p = 0.020) or older siblings (OR = 0.75; p = 0.004) were at a lower risk of developing a glioma (88). All other risk factors tested were not significant.

Another study, specifically dealing with Non-Hodgkin’s Lymphoma (NHL), investigated the relationship between cancer and direct (i.e. self-reported infection exposure) or indirect (i.e. family size, birth order, day care attendance, etc.) measures of exposure to infection.

NHL rates had been rising by 3-4% per year in the developed world, and it was hypothesized that this increase could be due to delayed infections leading to immune dysregulation (89). To test this, 1388 NHL patients, 354 Hodgkin’s Lymphoma patients, and 1718 healthy controls were recruited in Italy and questioned on their family size and history of acute and chronic infectious diseases as well as autoimmune disease.

This particular study found that individuals were at an increased risk for NHL if exposure to their first bacterial or viral infection was delayed until after the age of four. Smaller family size also appeared to be a greater risk factor for NHL (89).

Contracting a live pathogen is not the only way of obtaining protection against cancer. Vaccination with attenuated pathogens may also lower cancer risk. One group conducted a study on 542 malignant melanoma patients, assessing the effect of vaccination practices on survival (90).

The particular childhood vaccinations of interest, BCG and Vaccinia, were common until the late 1970s and 1980s. Although these patients already were dealing with cancer, valuable information was gathered. According to Kaplan-Meier analysis, melanoma patients immunized as children with BCG and/or Vaccinia survived much longer than their unvaccinated counterparts.

At five years following malignant melanoma diagnosis, the vaccinated group’s survival was about 75%, compared to approximately 50% in the unvaccinated group. Whether BCG and Vaccinia vaccination were considered separately or jointly, the hazard ratio for death in vaccinated melanoma patients was decreased compared to unvaccinated patients.

The study also analyzed the relationship between the number of reported bouts of infection and length of survival. As the number of infections (including osteomyelitis, mastitis, abscess, or furuncle) increased, the hazard ratio decreased, yielding a significant difference in survival irrespective of whether the infections were accompanied by elevated temperature (p=0.004).

In light of the above evidence, it should be clear that we need to act with more prudence when it comes to vaccination. It’s now recommended in the United States that everyone aged six months and over be vaccinated against influenza on a yearly basis, even though the vast majority of influenza deaths occur in people aged 65 and over. It’s questionable whether the benefits really outweigh the potential harm for many demographics.

I should note that although skepticism of vaccination is justified, it’s important to avoid falling into the trap that depicts vaccination as being part of some sort of conspiratorial plot against the general population. This discredits the critic and represents an unfalsifiable accusation.

A far better explanation is to recognize vaccination as a case where people are blinded by their strong faith in our ability to control and improve upon nature. The idea that viruses might not be merely some sort of cosmic burden placed upon organisms, but might actually play an important role in regulating a variety of processes in a manner that benefits their host, never occurs to most people.

It’s worth noting that American culture is quite unique in the excessive faith it places in modern medicine. There is no other country where per capita healthcare expenses are so high and where people are exposed to such an extensive vaccination schedule.

Having studied in the Netherlands myself, I was exposed to literature that was relatively critical of the supposed accomplishments of modern medicine. It was explained, for example, that our increase in life expectancy is largely the result of improved hygiene and nutrition.

In much of Europe, doctors tend to be more reserved and cautious in their treatment of patients. In Germany for example, St John’s wort is the most commonly used treatment for depression. In the United States on the other hand, Cymbalta is most preferred, despite the risk of suicide it carries.

Similarly, the United States accounts for 80% of global Ritalin consumption. Is something uniquely wrong with American children, or are Americans unique in their idea that every mild aberration from the norm needs to be treated with modern pharmaceutical medicine?

It’s generally accepted in polite company to raise these questions about the American preoccupation with treating every imaginable health condition, but vaccination, in contrast, remains a sacred cow. If you were to suggest that there can be such a thing as “excessive vaccination”, you’re commonly dismissed as some sort of anti-intellectual.

In fact, as the evidence above suggests, we have reason to be very skeptical of vaccination, as vaccines may have caused more harm than good. Most of the decline in deaths from vaccine-preventable illnesses occurred before the vaccines were introduced. Vaccines thus denied billions of people the reduced cancer incidence observed after natural infection, in order to prevent a very small minority of deaths that could not have been prevented through means other than the complete eradication of these viruses.

The disturbing prospect that awaits us as vaccination gradually eliminates every childhood febrile infection is that a generation of young people is currently growing up whose immune systems are ill-prepared for the potential development of cancer later in their lives. In fact, as the study I quoted itself suggests, current cancer rates already appear to be far higher than they would be if we simply tolerated some of the infections that regularly occur in children. Perhaps next time you see someone sneezing, you’ll say “bless you” for entirely different reasons.

Why a return to small-scale farming isn’t going to happen

A common trope in peak oil circles and other communities skeptical of industrial society’s long-term future is the assumption that the trouble society faces due to oil depletion will be addressed by a return to small-scale farms. Many people have embarked on permaculture projects, others have fled to the countryside to sit out the coming collapse of modern civilization.

The idea is simple to understand: people need to eat to survive; everything else is secondary. Thus, when all the excess layers of complexity enabled by fossil fuels are stripped away, we will be left with a situation where agriculture is the main potential source of employment. In addition, because industrial agriculture depends on oil, a lack of oil should make methods of farming that were rendered non-viable by new technologies viable again.

I am skeptical of this suggestion for a number of reasons that I will aim to outline in this essay. The first issue I want to address is that capitalism doesn’t by definition assign products a fair market value that adequately conveys the underlying issues involved in their production.

Anyone who looks at fossil fuels should understand the issues involved here. The first problem we notice is that fossil fuels are sold at a price that doesn’t allow us to address the damage caused to our environment. The second issue we notice is that fossil fuels are not even sold to us at prices that cover the costs that companies incur to extract them.

As Gail Tverberg and others have noted, the limits to growth we have encountered express themselves in the form of low oil prices, rather than high oil prices. This is because consumers can no longer afford to pay high prices due to debt limits. As you might have noticed in the years since 2008, you’re not spending a significantly larger share of your income on food. In fact, as of this writing in 2016, world food prices are very low again. What has primarily risen in cost are insurance, college tuition and rent.

Societies where a large section of the population works in agriculture have one thing in common: the vast majority of people’s income is spent on food. In the developed world, regardless of whatever decline in standard of living we might have experienced, most of our income isn’t spent on food; just roughly ten percent is.

It’s important to note that most of this doesn’t end up in the hands of the farmer who grew the crop. A disproportionate amount of that spending goes to restaurants. Another large share of the price ends up distributed among the many participants in the logistical chain through which food enters the supermarket.

The big secret of small-scale organic farms is that they’re economically non-viable. They’re kept alive through government subsidies on the one hand and consumer lifestyle choices on the other. People are willing to pay a lot more for a product that can be marketed to them as a “responsible decision”.

But what happens when people end up in a situation of economic scarcity? As I will explain, the consequence will be that agriculture merely becomes even more mechanized. The products that are cheap and affordable for us are the same products that tend to require relatively little physical labor. The reason is that employees are expensive.

What makes employees expensive? A variety of factors. Employees are susceptible to accidents and sickness, which lead to costly medical expenses. Employees have to be paid regularly, even if circumstances prohibit you from making effective use of their labor. A hailstorm in the Netherlands recently confronted farmers with this problem: their greenhouses had been destroyed, so they had to continue paying their employees despite being unable to make use of their labor.

Employees are also unpredictable. They could go on strike, they could make errors, they could become sick, they could steal from their employer or they could sue their employer for exposing them to conditions that render them infertile and give them cancer. For farmers, employees are a massive burden, one that they would happily get rid of if they could.

There are very few conditions imaginable that would render it financially attractive for a company to replace a machine with laborers. If we run out of oil, phosphate rock, water, fertile soil and other essentials to grow crops, companies won’t respond by hiring more laborers. Rather, what will happen is that companies will pass on the costs to consumers.

How will consumers react when costs are passed on to them? Consumers will start to cut down further on their food expenses. What this means is that they will cease to go out to dine as often, as this is the easiest place to cut down on expenses. Another option for them is to substitute more expensive food items with less expensive food items.

How could consumers cut down on their food expenses? One option will be to let go of their picky lifestyle choices. Eating only the flesh of animals that had a good life and saw sunlight is a decision available to people with excess money to spend. Similarly, eating organic foods grown without synthetic fertilizer and pesticides requires a position of choice that few people will retain as their budgets shrink. Organic foods require roughly twice as much labor input and as a result are more costly.

One important thing to understand about how agriculture will change is that the diversity of crops we eat today would amaze people who lived just a century or two ago, in a time when the human diet was almost entirely dominated by cereals. Grain is cheap and simple to grow. As our medieval ancestors developed improved agricultural methods and saw their population grow, the dominance of grain in the diet only increased. Between the eighth and eleventh centuries, cereals grew from a third of the typical person’s diet to roughly three quarters.

By the 18th century, the European diet was almost completely composed of grains. Then, by the late 19th century, wheat yields began growing faster than the population. As a result, per capita staple food consumption eventually reached a peak. Something unique happened: regular people were once again left with disposable income, which they could begin spending on food items that had until then been unaffordable to them and reserved for the upper classes: meat, fruit, vegetables, chocolate, coffee, tea, etcetera.

Our health has improved drastically as a consequence of our reduced reliance on grains alone. Scurvy has practically disappeared, as have rickets and other disorders caused by the poor diet that, in much of Europe, gave us a lower life expectancy than hunter-gatherers until well into the 19th century. In the poorest regions of the world, however, most people still have diets that consist of 80% or more cereal grains.

The problem, as you might have anticipated, is the fact that the new foods we added or re-added to our diet, are luxury goods. We eat them, not because they are a cost-effective method to ensure our survival, but because they improve our quality of life. For a hunter-gatherer, these two motives roughly align. The foods he happens to have access to will also typically be foods that are relatively healthy for him.

For those of us living in civilization, these two motives have been opposed to each other for hundreds of years: What’s cheap and easy for us to produce (wheat, rice, corn, potatoes) is not what happens to be most healthy for us. The real risk we face now, if our standard of living continues to decline at its current pace, is a return to the type of diet we left behind. The free range bison meat and organically grown pesticide free blueberries you buy at Whole Foods don’t inherit the future, the big sack of potatoes and industrially produced white bread you buy for a fraction of the price will.

It’s important to understand that for the staple crops we eat, the advantages that industrial agriculture happens to have over any more sustainable forms of agriculture are much larger than the advantages it has when it comes to other crops. These advantages are unlikely to disappear anytime soon.

As a result, collapse doesn’t lead to a break in the trend of industrial food production. It means a continuation, even as people begin to discover the major problems associated with it. We will merely grow more dependent on modern industrial agriculture. The scale advantages that it happens to have are too big for us to overcome.

What we should expect to see is that it will become increasingly difficult for people to live far away from major urban population centers. Many small rural towns and island communities are already dependent on government subsidies to keep them alive and in most of Europe, the countryside is being deserted. Hyper-urbanization is part of the process of collapse that we will witness in the decades ahead.

This is unfortunate, because most of the critique aimed at industrial agriculture is of course justified. It is destructive in nature and tolerates very little biodiversity in its surroundings. Many people would also prefer to maintain their traditional way of life rather than be reduced to passive consumerism. The only unjustified critique is the idea that we have a viable alternative.

The other side of the coin would be that ecological damage will be relatively limited if urbanization continues. The decline in subsistence farmers and the increase in rural flight towards the cities has opened up vast swathes of land that were formerly under cultivation. Today there are large plots of land where trees are growing in new forests that were formerly used for subsistence agriculture. Whatever life manages to survive outside of the cities may face less competition from us.

On the value of economic self-reliance: The green case for a Brexit

If there’s one thing I struggle to understand, it is how the Green Party reconciles its support for the United Kingdom’s continued membership of the European Union with its emphasis on the need to address climate change. As Baroness Jenny Jones has pointed out, as a lone exception to the “green” consensus, British membership is merely going to exacerbate the underlying problems that have become the driving forces behind climate change. For us to successfully address climate change, the idea of free trade as an unequivocal good has to be challenged.

Gail Tverberg and others have noted that the established solutions to climate change do not appear to work. We have seen many ecological problems before where the issue could be addressed through substitution and technological innovation: ozone depletion was addressed by an international agreement to phase out CFCs, and the buildup of persistent organic pollutants was similarly tackled through international treaties.

In the case of climate change however, things have worked differently. Climate change is a problem that is far more pervasive and intrinsic to the way our economy happens to function. The industrial revolution that led to the drastic rise in our standard of living was made possible by the exploitation of fossil fuels. To maintain our way of life in the absence of the fossil fuels that gave birth to it will be a gargantuan task.

Most efforts to rein in the growth of carbon emissions have so far proved futile. In the presence of free trade agreements, rising manufacturing costs have shifted industry away from Europe towards countries like China, where energy prices are still low and regulation is very limited. As a result, the decline in emissions in Europe has been more than compensated for by the rise in emissions in China.

As a result of the European Union’s freedom of movement, some national efforts to reduce carbon emissions have proved nonviable. In Germany, the introduction of an aviation tax led passengers to simply use airports in neighboring countries. A similar attempt by the Netherlands to introduce an aviation tax likewise led to passengers flying from Germany or Belgium.

In the Netherlands, the decision to raise petrol taxes led to a sharp rise in people buying their fuel just across the border, producing an unexpected budget shortfall. You might argue that these were simply foolish policies, but they illustrate an important principle: the current division of power is inefficient. It renders nation states impotent, and the likely outcome is thus that the EU will be forced to draw ever more power towards itself.

We live in a globalized world, where every country can freely trade with every other country and where, thanks to TTIP, companies will soon be able to sue European nations whose environmental legislation makes their business model less competitive. As a result, it becomes increasingly difficult for nations to protect the environment. We are stuck in a race to the bottom, where companies will simply shift their energy-intensive processes to whatever nation still happens to have cheap fossil fuels.

You might assume that the rise in renewable energy will address these issues. If solar power were to become cheaper than coal, companies would have no more incentive to move their manufacturing base from countries like Denmark and Germany to countries like India and China that still use dirty energy in large amounts. Thus we’d have no reason to challenge free trade agreements.

There are, however, reasons to be very skeptical of this suggestion. Certain processes appear highly dependent on fossil fuels. Even if we accept the suggestion that renewable electricity will soon be cheaper than fossil fuels (a big if), we have to consider that coal is used not just to generate electricity, but also to manufacture steel and in a variety of other processes where intense heat is needed. Renewable electricity can compete with fossil fuels because fossil fuels waste most of their heat in the process of generating electricity. When the end product needed is heat rather than electricity, fossil fuels have a strong competitive advantage, as the rough comparison below illustrates.
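All prices and efficiencies in this sketch are my own illustrative assumptions, not figures from this post; the point is the asymmetry, not the exact numbers:

```python
# Why the heat market is harder for renewables than the electricity market.
# All prices and efficiencies below are illustrative assumptions.

fuel_cost = 10.0          # $ per MWh of heat content in the coal (assumption)
plant_efficiency = 0.38   # thermal-to-electric conversion efficiency (assumption)
renewable_price = 40.0    # $ per MWh of renewable electricity (assumption)

# Electricity made from coal carries the ~62% conversion loss:
coal_electricity = fuel_cost / plant_efficiency   # ~$26 per MWh

# Heat made from coal does not:
coal_heat = fuel_cost                             # $10 per MWh

print(f"Electricity: coal ~${coal_electricity:.0f}/MWh vs renewables ${renewable_price:.0f}/MWh")
print(f"Heat:        coal ~${coal_heat:.0f}/MWh vs renewables ~${renewable_price:.0f}/MWh")
# Under these assumptions the gap renewables must close is ~1.5x for
# electricity, but ~4x when the end product is heat.
```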

More importantly perhaps, international trade itself is a process that seems impossible for us to render carbon neutral. There will never be cargo airplanes powered by solar energy, as batteries simply can’t store the massive amounts of energy needed for these planes to lift off the ground. Air transport is instead looking to biofuel to become carbon neutral, which means that we would end up massively increasing our land use if we wish to maintain the growth in air transport.

The transportation of goods by truck might be physically possible with electric vehicles, but it’s difficult to envision this ever becoming economically competitive with trucks that run on diesel. Similarly, we run into trouble when we envision carbon-neutral merchant ships. Forty percent of the world’s merchant ships are registered in Liberia, the Marshall Islands and Panama, because these nations place few regulatory burdens on the merchants. How are you ever going to enforce a target of zero carbon emissions on such an industry?

This is all well and good, you might argue, but isn’t our economy reliant upon international free trade by now? This is correct, but the degree to which we will rely on international free trade in the future is a choice we make ourselves. To a large degree, the trade that happens between Britain and other nations is excessive.

Note for example, that Britain imports 61,400 tonnes of poultry meat a year from the Netherlands and exports 33,100 tonnes to the Netherlands. Britain also imports 240,000 tonnes of pork and 125,000 tonnes of lamb while exporting 195,000 tonnes of pork and 102,000 tonnes of lamb.

It’s quite clear that the suspension of trade barriers has led to the unnecessary and wasteful transport of a variety of goods between different nations. But why does this happen? Even if trade between nations is free, wouldn’t the invisible hand of the free market end up limiting excessive transport to a minimum?

There are a number of issues intrinsic to the European Union that might contribute to this. For example, the VAT system of taxation encourages carousel fraud, in which fraudsters earn an estimated 50 billion euros per year by exporting products across European borders and claiming back VAT they never paid in the first place. As a result, some products in the EU are transported across borders for the sole purpose of tax fraud.

Similarly, to protect certain long-established industries, the EU only allows certain regions to use particular terms for their food products. Cheese may only be called feta if it is produced in Greece, while Parmesan cheese has to be produced in the region of Parma.

As a result, producing a product with a protected status becomes the main way for food producers to earn money, as they don’t face competition from outsiders who might be able to produce the product too. The limited number of producers restricted to a particular geographical region also enables the creation of economic cartels that keep prices artificially high.

There are of course arguments to be made in favor of this policy, but in practice the combination of free trade between European nations and the regional protection of food items leads to the excessive shipping across borders of products that could just as easily be produced domestically.

We can choose to grow more dependent on the import of products from foreign nations, or we can attempt to maintain the natural economic barriers that existed for centuries before we became dependent upon fossil fuels. This would also have the effect of rendering us less vulnerable, should international trade happen to break down for whatever possible reason.

It’s not hard to envision how the consequences of free trade and increased reliance upon foreign nations could come back to haunt us. Saudi Arabia, the world’s second largest oil producer, is a politically unstable nation. It’s bordered by two nations with large swathes of territory governed by Al Qaeda and Islamic State, and it has a domestic religious minority of Shia Muslims who live on top of most of its oil fields. If Saudi Arabia ever goes the way of Venezuela or Libya, it’s easy to see that there would be international consequences.

Similarly, the Suez Canal lies next to territory that is home to terrorist groups that have successfully taken down airplanes. In some places the canal is a mere 300 meters wide, so it’s not hard to see how domestic instability could shut off the canal, causing enormous global disruptions in the free exchange of goods.

The more complex a system becomes, the more potential points of failure emerge. Free international trade has turned into an immensely complex system, host to a variety of factors that could lead to cascading failures, many of which are difficult for us to anticipate.

An exit from the European Union today would provide Britain with the opportunity to reduce its economic reliance on other nations. If an exit does not occur this year, there probably won’t be another opportunity for many years to come. In the meantime, the global economy and the European Economic Area would grow increasingly interconnected, ensuring that if the United Kingdom were to leave in the future, the economic consequences would be much bigger.

To me it is clear that the best solution is for the United Kingdom to make use of this once in a lifetime opportunity to leave the European Union. Leaving the European Union would help prepare Britain for the inevitable era of economic contraction that is ahead when the global economy runs into finite limits. The limited resources we have on our planet would be used in a less wasteful and more efficient manner again.

Perhaps most important of all, Britain’s political borders were shaped by the geographical boundaries of an island. Boundaries exist in nature for a reason: they contribute to diversity and create isolated communities that can survive even as other communities are in turmoil. This phenomenon occurs at every level, from the appendix, which hosts bacteria that recolonize the intestinal tract after disruption, to islands like Socotra, where species survive that have gone extinct everywhere else.

Nature never puts all of its eggs in one basket, nor should we. As we all know today, Britain’s isolation ultimately proved to be to everyone’s benefit when previous utopian experiments imploded and Britain was able to help reestablish order on the mainland of Europe.

Why India will never become an industrialized nation

It’s easy for people to underestimate the effect that environmental conditions have on a nation’s economic development. In the case of India, these environmental conditions prohibit the nation from ever experiencing the type of economic boom we have seen in nations like China and South Korea.

There is no reason whatsoever to assume that a pattern that played out in currently industrialized nations will by definition repeat itself in other nations. Many nations are likely never to experience the type of carbon-intensive lifestyle currently seen in the developed world.

The argument frequently heard from right-wing politicians in the US and Europe, that it doesn’t matter what they do to address global warming because India and China are just going to continue spewing out carbon dioxide, misses an important point: India is never going to industrialize. How much carbon dioxide is released into the atmosphere will ultimately depend mostly on the decisions made in countries that are currently developed.

Some of the problems India is bound to run into that will prohibit it from ever industrializing are as follows:

Peak coal

India simply doesn’t have the big coal reserves that other nations do. Coal India Limited is a state-owned corporation responsible for 80% of India’s coal production. At the growth rate the company aims for, its currently estimated reserves would be exhausted within 14 years.
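To see how production growth shortens the lifetime of a fixed reserve, here is a minimal depletion sketch. The reserve, production and growth figures are my own illustrative assumptions, chosen only to land near that 14-year figure; they are not Coal India’s actual numbers:

```python
# How long a fixed reserve lasts when production grows exponentially.
# All inputs are illustrative assumptions, not Coal India's actual figures.
import math

reserves = 15.0     # billion tonnes of extractable coal (assumption)
production = 0.55   # billion tonnes produced this year (assumption)
growth = 0.09       # targeted annual growth in production (assumption)

# Cumulative output after t years of compounding growth:
#   production * ((1 + growth)**t - 1) / growth
# Setting this equal to the reserves and solving for t:
t = math.log(reserves * growth / production + 1) / math.log(1 + growth)
print(f"Reserves exhausted after roughly {t:.0f} years")  # ~14 years
```

Note how unforgiving the compounding is: under these assumptions, constant production would stretch the same reserve to about 27 years.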

Note that currently estimated reserves don’t account for the fact that not all reserves can be accessed. India has lignite reserves located in the middle of the Rajasthan desert, but lignite can’t be transported far from the place where it’s mined, due to its low energy content. Other reserves are located beneath valuable farmland and densely populated areas. Keep in mind that the entire nation of India has a population density roughly as high as that of the Netherlands, a country widely considered overpopulated.

Ozone pollution

India’s crop yields are decimated by ozone pollution from cars and industrial processes. It’s estimated that as of 2014, ozone pollution caused India to lose 9.2% of its crop yield every year. Most ground-level ozone pollution can be traced back to vehicle transport, yet India in 2011 had just 13 cars per 1000 people, compared to the 500-800 per 1000 range typical of industrialized nations.

Ozone formation is a strongly temperature-dependent process that accelerates non-linearly as temperatures increase. This is why the impact of ozone pollution on crop yields is so much higher in India than in other nations. As temperatures in India continue to rise, ozone pollution is set to grow much worse.

As a result of climate change, India is also expected to see a strong rise in stagnant-air days, by up to forty per year, which ensure that the pollution generated by fossil fuel combustion isn’t blown out over the ocean. As a consequence, Indian citizens will suffer the health effects and Indian crops will suffer reduced yields. To have as many cars driving around in India as in Europe and North America would be a very bad idea.

Thermal pollution and water shortages

Exploiting coal and natural gas requires water to drive steam turbines. In most Western nations, water shortages are not a big issue. In India, however, water is scarce and about to become scarcer: India has some of the world’s most rapidly depleting aquifers, and the Indus river depends mostly on meltwater from the Himalayas.

If India cannot find the cool fresh water it needs to drive its steam turbines, the rapid rise in coal and natural gas use will prove unsustainable, and electricity use will have to be rationed. The country’s thermal power plants already account for over half of the country’s total water use. A water shortage will thus inevitably also mean an energy shortage.

The bottom line

The IEA thinks China’s coal use peaked permanently in 2013, and its carbon emissions in 2014. Europe’s emissions peaked years ago and are now rapidly declining. If India’s industrial development is inevitably constrained by its own situation, the outcome of this crisis will largely depend on nations like the United States. Dodging responsibility by pointing to China and India is not a viable argument, as decisive action in the United States could strongly reduce the total cumulative emissions our world will witness.

Oceanic iron fertilization: A technofix or a green solution?

Most efforts to address climate change so far have aimed to reduce our carbon dioxide emissions. Unfortunately, these efforts haven’t had the effect people hoped for. Emissions appear unlikely to decline anytime soon, due to the rapid economic growth of developing countries, and carbon capture and sequestration turned out to be more difficult than assumed: most of those experiments have been cancelled.

It would have been possible decades ago to come to an agreement on the need to end economic growth and reduce our overall energy use and material consumption. As Tim Jackson has shown, for nine billion people to enjoy the standard of living currently seen in the EU by 2050, the carbon intensity of the economy would have to fall to less than 2% of its current level per dollar of GDP, something that seems very implausible.
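A back-of-the-envelope check of what an intensity target like that implies. Every number below is my own illustrative assumption, not a figure from Jackson’s work:

```python
# Back-of-the-envelope check on the required fall in carbon intensity.
# Every number here is an illustrative assumption, not Tim Jackson's data.

population_2050 = 9e9     # people, the scenario discussed in the text
gdp_per_capita = 30_000   # $ per person per year, roughly EU-level (assumption)
allowed_emissions = 3e12  # kg CO2 per year compatible with stabilization (assumption)
current_intensity = 0.5   # kg CO2 per $ of world GDP today (assumption)

world_gdp = population_2050 * gdp_per_capita        # $ per year
required_intensity = allowed_emissions / world_gdp  # kg CO2 per $

print(f"Required intensity: {required_intensity:.3f} kg CO2/$")
print(f"As a share of today's intensity: {required_intensity / current_intensity:.1%}")
# ~0.011 kg CO2/$, i.e. roughly 2% of the assumed current level.
```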

The focus thus shifted to the search for new technologies that would allow us to continue pursuing growth without suffering the consequences of climate change. As John Michael Greer has pointed out, however, there is no rational reason whatsoever to assume that there has to be some source of energy out there with all the same advantages as fossil fuels, but without the catastrophic effects on our biosphere.

To assume that we can simply move on to another source of energy without suffering any disruption to our standard of living reveals an anthropocentric cognitive bias. As we have found out in recent years, there are limits to the degree of power that technology can give us over our natural environment. The power technology has granted us has also given us a blind spot for the degree to which we depend on the functioning of our ecosystems.

Of course, if we find that we cannot reduce our fossil fuel use, other options remain available to us. The other side of the equation, carbon sequestration by the biosphere, is not fixed in place but depends on a variety of factors, including human land use.

One option available to us is known as oceanic iron fertilization. In large parts of the ocean, iron is thought to be the limiting element in carbon sequestration by algae. Under iron-deficient conditions, every atom of iron can be used by algae to sequester 106,000 atoms of carbon. In theory, iron fertilization therefore allows us to sequester a lot of carbon dioxide at very low cost. In practice, results have been somewhat disappointing: sequestering 1000 tonnes of carbon requires a full tonne of iron.
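The gap between theory and practice becomes clear when the atomic ratio above is converted into a mass ratio; a quick sketch:

```python
# Converting the atomic C:Fe ratio quoted above (~106,000 : 1) into a mass
# ratio, to compare it with the ~1000 : 1 achieved in practice.

atomic_ratio = 106_000   # carbon atoms fixed per atom of iron (from the text)
molar_mass_c = 12.011    # g/mol, carbon
molar_mass_fe = 55.845   # g/mol, iron

mass_ratio = atomic_ratio * molar_mass_c / molar_mass_fe
print(f"Theory: ~{mass_ratio:,.0f} tonnes of carbon per tonne of iron")
# ~22,800 tonnes per tonne of iron in theory, versus ~1,000 tonnes observed:
# that gap is why the practical results count as disappointing.
```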

Of course, there are plenty of reasons to be skeptical and worried about the potential impact of such interventions. We are, after all, intervening in a complex system, and complex systems have a tendency to react in ways that are difficult to anticipate in advance. Even experiments done at a limited scale may not adequately translate into the large-scale projects that would help us cope with climate change.

Fortunately, we can observe what happens when oceanic iron fertilization occurs through natural processes. Volcanic eruptions fertilize the ocean with iron from their dust at a very large scale, at seemingly random intervals. The ecological harm done by such fertilization seems to be very limited.

The argument that iron fertilization represents an intervention in the ecosystem can also be turned around: iron fertilization fills a hole in the ecosystem left by human activity. The rate at which carbon dioxide is entering our atmosphere is extremely rapid; we know of no geological analogue. As a consequence, certain slow negative feedback processes take a long time to become relevant and prevent escalating temperatures.

As an example, a gradual increase in temperatures is normally associated with an increase in icebergs that travel across the ocean and deposit nutrients, including iron. This process is thought to sequester a significant share of the carbon dioxide in the atmosphere. Because our carbon emissions are now so abnormally rapid, oceanic iron fertilization could be interpreted as a substitute for this natural negative feedback until it has time to kick in.

In addition, there is another human-caused problem in which oceanic iron fertilization can play an important role. The decline in whales has likely led to a decline in algae. Whales defecate at the surface and dive to great depths, continually cycling iron throughout the ocean, sequestering large amounts of carbon dioxide in the process and helping to keep the oceans they inhabit full of life. The decimation of whales in recent centuries has reduced this cycling of iron.

For humans to add iron to the oceans could thus be seen as performing a compensatory role that whales cannot currently perform adequately due to their low numbers. An increase in algae due to iron fertilization may in turn increase whale numbers. For this reason, studies finding that the sequestered carbon dioxide does not sink to the bottom of the ocean are not by definition reporting failure, if it means that the nutrients continue to be recycled through the food chain.

Estimates of the role that iron fertilization can play suggest a limited impact. One estimate, in which iron limitations in the ocean are eliminated globally, calculates that 33 parts per million could be shaved off our atmospheric CO2 levels by 2100. This, however, understates the full impact that fertilization can have.
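For a sense of scale, that 33 ppm can be converted into tonnes of carbon, and then into the iron required at the practical ratio cited earlier. The conversion factor of roughly 2.13 gigatonnes of carbon per ppm of CO2 is a standard figure; the rest follows from numbers already in this post:

```python
# Putting the 33 ppm estimate in context: ppm of atmospheric CO2 converted
# to tonnes of carbon, then to iron at the practical ~1000:1 mass ratio.

GTC_PER_PPM = 2.13   # gigatonnes of carbon per ppm of atmospheric CO2

ppm_shaved = 33
carbon_gt = ppm_shaved * GTC_PER_PPM   # ~70 GtC over the century
carbon_mt = carbon_gt * 1000           # gigatonnes -> million tonnes
iron_mt = carbon_mt / 1000             # ~1000 t of carbon per t of iron (from the text)

print(f"Carbon to sequester: ~{carbon_gt:.0f} GtC")
print(f"Iron required at 1000:1: ~{iron_mt:.0f} million tonnes")
```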

Life creates the conditions that allow more life to come into existence, and algae are perhaps a prime example of this rule. Algae blooms are brightly colored, increasing the planet’s albedo. An example of such an algae bloom is shown below:

[Satellite image: a coccolithophore bloom off the coast of France]

This particular example is a coccolithophore bloom; such blooms are becoming increasingly common due to our changing climate. Their bright color increases the planet’s albedo, reducing global temperatures by reflecting more sunlight. In addition, algae produce large amounts of dimethylsulfide, which increases cloud formation, reflecting yet more sunlight and further aiding the stabilization of global temperatures. The total potential effect on global temperatures is thus likely to be much larger than hitherto estimated on the basis of sequestered carbon dioxide alone.

It seems to me that the top priority for environmentalists should currently be to preserve a habitable Earth for as many species as possible. Oceanic iron fertilization can play a role in this, as it fills a gap created by human interventions in natural processes. It does not have the potential to avert a catastrophe on its own, but neither should it be dismissed: as part of a broader plan, combined with other options, it may yet help us avoid a global catastrophe as our fossil fuel use continues unabated.