CHAPTER 2

Medical Care

A History

Aren’t you glad you live in the 21st century rather than the 17th, 18th, or 19th century, when medical care was so primitive that a minor ailment often meant an early—and certain—death? And if you were fortunate enough to survive childbirth and infancy, the best available treatments were often home remedies applied to a range of illnesses. Although the practice of medicine can be traced to the Greeks and Romans, it was not until the major breakthroughs of the late 19th and early 20th centuries, and the discovery of life-saving drugs and vaccines, that infants, children, and adults could reliably survive. Prior to these life-altering medicines and therapeutics, for most human beings, life was short and sometimes painful.

The great advances in medicine, hygiene, and nutrition boosted life expectancy from just 40 years in 1860 to nearly 79 years in 2020. Since 2014, however, life expectancy has declined marginally in America; the major factors behind this phenomenon are poor diets, sedentary rather than active lifestyles, rising medical costs, and increased rates of suicide.1 In addition, drug use has curtailed the lives of many young adults and others who have suffered from depression and other mental illnesses, especially during the first wave of the COVID-19 pandemic. Nevertheless, the survival rate for many “death sentence” diseases, such as cancer and other chronic conditions, has improved markedly in the 21st century. The survival rate for COVID-19, moreover, is more than 99 percent for most of the population. For the elderly (age 65+) in general, the COVID-19 death rate was 93 per 100,000, a figure that varies widely by state depending on the vaccination rate.2 This raises the question: what were the underlying health conditions of the individuals who died from COVID-19?

Undoubtedly, a COVID-19 type of pandemic in the 18th or 19th century would have killed a substantial portion of the population. The death toll from COVID-19 reached 955,000 by early 2022 out of a US population of 332 million, a mortality rate of less than 0.29 percent—assuming, of course, that the death count is accurate and does not merely include individuals who died with COVID rather than from COVID. In other words, the likelihood of surviving this allegedly virulent illness in America today is very high.3

How has medical care evolved since the time when getting sick meant, at best, a 50–50 chance of survival? Today, contemporary medical practice has achieved enormous success and saved millions of lives. Let’s look back at the earliest days of the practice of medicine to see how medical knowledge accumulated and led to the enormous increase in life expectancy.

Early Physician Care

In colonial America, the practice of medicine can be divided into two components—rural medicine and urban treatment. In rural areas, where physicians were few and far between, home remedies were used to deal with typical medical problems. Occasionally, a physician would visit a rural resident and pull a rotten tooth, after which the individual would be in substantial pain. Payments were usually made in kind—food and other agricultural products, as well as such items as handkerchiefs. Follow-up visits came weeks or months later, if at all.4 Meanwhile, as America urbanized in the early part of the 18th century, physicians and patients had more frequent contact. Physicians typically kept detailed records of their visits, the ailments being treated, and the payments received, leaving a treasure trove of data regarding the practice of medicine in colonial America.

Cotton Tufts, a Harvard-educated physician practicing in the late 18th century, treated fevers as well as stomach and throat illnesses with “a solution of wine and lemon juice, salt, loaf sugar and distilled cordial water served in a wine glass.” In short, American physicians used “natural” treatments, a reflection of the influence of the Caribbean trade routes. Scholars have concluded that colonial physicians also relied on Native Americans’ knowledge of indigenous plants and herbs to treat patients.5

Even in the 1700s, physicians disparaged midwives and local healers who did not have formal medical training and lobbied states such as New Jersey to establish “professional medical standards” to prevent “quackery.” These efforts, in effect, limited the supply of doctors “to white male physicians trained in European-influenced schools of thought.”6

Undoubtedly, the outbreak of smallpox was the earliest medical challenge to the colonists during the American Revolution and the early years of the fledgling American republic. George Washington, who contracted smallpox in 1751 on the island of Barbados, had acquired immunity. When the American Revolution began, the disease was ravaging his troops in their battle for independence. After consulting with his physician advisors, Washington made the decision to inoculate his troops with the virus so they would be protected from a full-fledged illness. It worked. The contagion was contained, and, as they say, the rest is history. America became an independent nation.7

Prior to 1820, medical training consisted primarily of apprenticeships. Soon, proprietary medical schools were founded, typically operating independently of a university or hospital. Licenses were usually not required to practice medicine, and in reaction to this more or less unregulated medical marketplace, doctors formed the American Medical Association (AMA) in 1847. Ostensibly, the purpose of the AMA was to establish high standards for medical doctors and impose sound clinical principles on the practice of medicine.8

Not surprisingly, the Civil War had a major impact on the practice of medicine. With large numbers of soldiers huddled closely together, disease transmission became a major concern, prompting the establishment of public health boards and more research into infectious diseases. The casualties on the battlefield also meant that Civil War surgeons returned home with new techniques for treating the residents of their communities.9

This era could be classified as the “professionalization” of medical care. State licensing of physicians became more common, medical specialization increased, and with it came associations of doctors who would no longer be considered general practitioners. Government funding of medical facilities also began, modestly at first.10

The Modern Age: Remedies and Quacks

The modern age of medicine began on the eve of the Civil War, when at least 55,000 physicians were practicing, giving the United States “one of the highest per capita number of doctors in the world (about 175 per 100,000).” A decade later, the number of physicians had jumped to roughly 62,000. Most physicians were conventional doctors, but some were homeopaths (approximately 5,300) and fewer than 3,000 were eclectics, who treated patients with herbs and other noninvasive therapies.11 The battle was on to establish best-practice medical protocols, with lasting ramifications for the evolution of medicine. In addition, the path that medicine took more than 150 years ago would set the agenda for federal and state legislation, the costs of medical care, and ultimately the doctor–patient relationship.

As with many myths about episodes in American history, the conventional wisdom deserves scrutiny. Economist Dale Steinreich challenges the notion that early medical practice was dominated by “snake oil” salesmen traveling from town to town, duping an unsuspecting public with ineffective or even poisonous “remedies.” Steinreich argues this is a myth perpetuated by moviemakers who based their portrayals on the medical establishment’s assertion that conventional medicine was sound, even though early treatments such as bloodletting and the use of metals were killing patients. The real reason conventional practitioners (allopaths) were adamantly opposed to both homeopaths and eclectics, according to Steinreich, is that those competitors cut into the allopaths’ incomes.

Soon the AMA began to push for legislation that would reduce the number of medical schools and hence the supply of doctors. In some states, the AMA would call for the outright ban of homeopaths and eclectics, which would further reduce the supply of medical practitioners. And to add insult to injury, in 1870, the AMA prohibited women and blacks from joining the organization.12

The most ominous development in the history of medical care was the notorious Flexner Report, named after Abraham Flexner, brother of Simon Flexner, a director of the Rockefeller Institute for Medical Research. In short, the Flexner Report, a product of the Carnegie Foundation, essentially restated the 1906 report of the AMA’s Council on Medical Education (formed in 1904), which recommended closing many medical schools; the number of schools fell from 166 at the time to 77 during World War II. Rural medical schools were closed in great numbers, and only two of the medical schools that trained Black physicians were allowed to remain open. Not surprisingly, the law of supply and demand worked its magic: physicians’ incomes rose dramatically as the supply of new doctors was curtailed markedly.13

Good news for physicians but bad news for the American people. Government intervention in medicine has driven up prices for patients under the guise of “protecting” the public from unscrupulous nonconventional practitioners, who may not have attended AMA- and state-approved medical schools. This is another example of “regulatory capture,” where one interest group uses the power of the state to squelch competition and drive up prices.

The Committee on the Costs of Medical Care (CCMC) was created in 1925 and funded by the Carnegie Corporation and other private foundations, with assistance from the AMA, the American Hospital Association, government agencies, and think tanks, to address medical price inflation. In 1932 the CCMC published a comprehensive report, based on previous studies, asserting that the increase in medical care costs was somehow the result of new technologies and other scientific breakthroughs, greater utilization of hospitals, and increased costs of medical training and supplies. However, restriction of the supply of physicians was the primary culprit in the medical price inflation that began once the policies enacted by state governments at the behest of the Flexner Report became widespread.14

As the supply of doctors was restricted, so too was the supply of hospitals. For-profit hospitals felt the sting of burdensome regulations and had to compete on an uneven playing field with nonprofit hospitals, which did not have to pay income or property taxes. Furthermore, government subsidies and tax-deductible contributions gave nonprofit hospitals a financial edge over their for-profit counterparts. In the 1920s, 60 percent of all hospitals were for-profit; that share dwindled to 11 percent by the late 1960s.15 However, for-profits comprised nearly 25 percent of all hospitals in 2019.

Community Self-Help Groups

Legendary investor Warren Buffett, who has been CEO of Berkshire Hathaway since 1965, once quipped something to the effect that “there are two types of competition I don’t like—foreign and domestic.”16 Competition tends to lower prices for consumers because producers must make sure they are providing value; otherwise, customers will patronize a competitor who offers a better value. It is no different in the professions. Competition among practitioners means consumers can shop for high-quality, low-cost providers. The history of self-help organizations is yet another example of how attempts to squash competition drive up prices for consumers.

As David T. Beito recounts in his history of mutual aid organizations, fraternal societies provided affordable medical services to their members during the late 19th and early 20th centuries. A local lodge would hire a physician as a salaried employee of the fraternal society. Members of the lodge were enthusiastic about having access to quality, low-cost medical care. One of the medical profession’s chief criticisms was that fraternal societies undercut prevailing fees.17

The concept of contract practice dates to the colonial era, when plantation owners hired doctors to treat slaves. The practice continued in many industries after the Civil War; labor unions, for example, used contract practice to provide medical care for miners. In New Orleans, Black mutual aid societies hired physicians to treat their members on a per capita basis. In the early part of the 20th century, lodge practice was making inroads in Chicago and New York and was especially popular among Italians, Greeks, and Jews.18

In New York City, for example, dispensaries were created to provide low-cost medical care; the doctors who worked in them were volunteers or donated most of their time. New Orleans had a robust tradition of fraternal societies providing medical care to members, and some of these organizations were founded well before the Civil War. A major impetus for the creation of mutual aid societies in New Orleans was the need to protect people from the vicissitudes of life in the absence of substantial government social welfare spending. Lodge practice thrived for many decades in part because doctors were typically “on call” for members.19

Lodge practice peaked in the 1920s, after the medical profession “launched an all-out war” against it. In many states, medical societies sanctioned physicians who entered into lodge contracts, and county societies actively undermined lodge practice.20 The relentless assault on a community solution for low-income families reveals how the power of the state and cartel-like arrangements in medical care reduced the availability of physician services. It is yet another reminder that the reassurance “we are from the government and here to help you” is, as some have observed, an enduring myth.

Employer-Based Medical Insurance

The earliest examples of employer-based medical insurance appeared in Oregon and Washington in the early 1900s, when companies began to cover the medical costs of their timber and mining workers. To keep costs down, adjusters scrutinized the fees, procedures, and hospital stays of injured workers. Physicians resented having someone look over their shoulders as they practiced medicine.

Meanwhile, as hospitals expanded, they needed a consistent income stream to pay for their fixed costs. To address this financial issue and to help employees pay for hospital costs, Dallas schoolteachers, in 1929, negotiated a contract with Baylor University Hospital for hospital insurance. Teachers paid six dollars per year in premiums; in return they would receive up to 21 days of hospital care. Soon, this idea spread and became the basis for Blue Cross, which began operations in Sacramento, California, three years later.

During the Great Depression, physicians were concerned about getting paid for their services because of the high unemployment gripping the nation and the rise of proposals for compulsory national health insurance. Physicians also worried that, with the spread of prepaid hospital plans like Blue Cross, their fees could be curtailed if employers or other third-party payers embraced a similar model for physician care. To counteract these threats, physicians created Blue Shield (the first plan began operations in 1939) as a way to get paid for their services during the bleakest economic period in American history. I will discuss both Blue Cross and Blue Shield in depth in Chapter 3.

The great leap forward in employer-based medical insurance occurred during World War II, when the federal government imposed wage and price controls to dampen the inflationary pressures of the Federal Reserve’s easy money policies and military spending. Prevented from raising wages, employers offered tax-free fringe benefits such as medical insurance to attract workers. The die was cast: employees now expected businesses to provide health insurance as part of their compensation packages.

A clarification is in order. Comprehensive “health insurance” is in effect the responsibility of every adult, who has a personal obligation to himself or herself to lead a healthy lifestyle and prevent virtually all illnesses. However, we know that some diseases run in families—cancer, heart disease, diabetes, and so on—and therefore have to be managed throughout one’s lifetime. Medical insurance, on the other hand, is necessary to pay for so-called big-ticket medical procedures that are out of the financial reach of the average employee. Pooling resources via insurance or any other method to reduce the financial risk of major medical illness is both prudent and wise. But medical insurance has evolved into a prepaid plan for virtually all medical costs, a distortion of the primary principle of insurance, which is to cover unexpected, catastrophic expenses, not routine doctor visits and procedures.

Employer-based medical insurance is thus another example of the law of unintended consequences. Before World War II, employers never envisioned that they would be responsible for their employees’ medical insurance; the federal government’s economic policies were the spur for this arrangement. Financially sound employee compensation packages would focus on wages and salaries, and employees would then allocate their incomes to retirement savings and medical costs, including catastrophic insurance. Of course, employers could invite vendors to make their “pitch” to employees so that workers could make wise choices regarding their individual and family needs. I will review the medical insurance markets in Chapter 4 and how they evolved in Chapter 5.
