A Christian Case Against Christmas

About a year and a half ago, I befriended an unusual, non-denominational Christian family, who struck me as God-fearing, Spirit-led folks. I quickly found out that they did not celebrate Christmas, and in fact thought it was wrong or at least ignorant for any Christian to do so. My initial response was something to the effect of, “Umm, you’ve got to be kidding me.”

Fast forward to the present (no pun intended), when I find myself wrestling through whether to continue to celebrate Christmas.  

Sometime in the late 2nd century or early 3rd, the esteemed church father Tertullian penned a book titled “De Idololatria,” Latin for “On Idolatry.” Therein, he lamented Christians’ celebration of “Saturnalia”—an ancient Roman festival honoring the god Saturn in late December. (See Wikipedia’s “Saturnalia” page, which uses the word “Christmas” 25 times.)

More specifically, Tertullian wrote:

By us [Christians], who are strangers to Sabbaths, and new moons, and festivals, once acceptable to God, the Saturnalia, the feasts of January, the Brumalia, and Matronalia, are now frequented; . . . oh, how much more faithful are the heathen to their religion, who take special care to adopt no solemnity from the Christians.

In other words, rather than practicing the Sabbath or celebrating festivals designed for God’s people, Christians were taking part in pagan holidays to the chagrin of Tertullian.

The antipathy to Christmas has not been limited to the early church or the church fathers.

In fact, “[t]he contemporary war [over] Christmas”—which rails against the use of “Xmas” or “Happy Holidays” rather than “Merry Christmas,” for example—“pales in comparison to the first—a war that was waged not by retailers but by Puritans who considered the destruction of Christmas necessary to the construction of their godly society.” (NYT, “Yuletide’s Outlaws”). In 1647, an English Puritan government canceled Christmas. In 1659, a Massachusetts Puritan government outright banned the holiday.

Why did they do this? “Puritans argued (not incorrectly) that Christmas represented nothing more than a thin Christian veneer slapped on a pagan celebration. Believing in the holiday was superstitious at best, heretical at worst.”  This was despite the fact that, “In the early 17th century England, the Christmas season was not so different from what it is today: churches and other buildings were decorated with holly and ivy, gifts were exchanged and charity distributed among the poor.” Id.

Seventeenth-century Massachusetts minister Increase Mather explained the Puritans’ rationale for not celebrating Christmas this way: “the early Christians who first observed the Nativity on December 25 did not do so thinking that Christ was born in that Month, but because the Heathens’ Saturnalia was at that time kept in Rome, and they were willing to have those Pagan Holidays metamorphosed into Christian one[s].”

While the Massachusetts ban on Christmas only lasted until 1681, the Puritans’ war on Christmas persisted throughout the 18th century and well into the 19th. In fact, the U.S. government did not recognize Christmas as a federal holiday until 1870.

Today, however, the vast majority of Christians, at least in America, consider Christmas to be a Christian holiday. Even the Wikipedia page for Christmas recognizes this, opening with, “Christmas is an annual festival commemorating the birth of Jesus Christ.” Sure, maybe Santa Claus and consumerism tend to obscure or take away from the “reason for the season”—but the reason nonetheless remains. Heck, it’s even in the name—Christ-mas.

In deciding whether to celebrate Christmas as a Christian, I think a good starting point is to decide whether Jesus was in fact born on or near December 25. If he was, then any contemporaneous pagan holidays seem like coincidence at worst, or part of God’s providential plan to overshadow and ultimately stamp out these pagan holidays at best.

Unfortunately, it appears to be the scholarly consensus that it is highly unlikely Jesus was born anywhere near December 25. First, it is improbable that Judean shepherds would have been “keeping watch over their flock by night” in late December. (Lk. 2:8). In his 19th-century book “The Two Babylons,” pastor and author Alexander Hislop explains:

Now, no doubt, the climate of Palestine is not so severe as the climate of this country; but even there, though the heat of the day be considerable, the cold of the night, from December to February, is very piercing, and it was not the custom of the shepherds of Judea to watch their flocks in the open fields later than about the end of October.

Second, it is equally improbable that the Roman governor Quirinius would have ordered the Jewish people to submit to a census in the dead of winter. “At the birth of Christ every woman and child was to go to be taxed in the city whereto they belonged, whither some had long journeys; but the middle of winter was not fitting for such business, especially for women with child and children to travel in.” (17th-century English scholar Joseph Mede).

Mede then writes:

And if any shall think the winter wind was not so extreme in these parts, let him remember the words of Christ in the gospel, ‘Pray that your flight be not in the winter.’ If the winter was so bad a time to flee in, it seems no fit time for shepherds to lie in the fields in, and women and children to travel in.

Thus, we can be reasonably assured on this basis alone that Jesus was not born in December. But does Scripture shed any light on the subject?

Many scholars attempt to ascertain Jesus’s birth date by turning to the conception of John the Baptist. This is because the Gospel of Luke indicates that Mary conceived Jesus about six months after Elizabeth conceived John the Baptist. (See Lk. 1:23-26). Thus, if we can figure out when John the Baptist was conceived, we can skip ahead about fifteen months—the six-month gap between the conceptions plus roughly nine months of gestation—to get an idea of Jesus’s birth date.

Luke tells us that Zechariah (John the Baptist’s father) “belonged to the priestly division of Abijah.” (Lk. 1:5). Abijah was one of 24 priestly divisions, each of which was to perform priestly duties for one week, twice per year. King David established these divisions, as reflected in 1 Chronicles 24. While the divisions collapsed during the Babylonian captivity, they were restored thereafter. Even the first-century Jewish historian Josephus indicates they were restored under the original names of the 24 divisions from the Davidic period.

Zechariah’s division, Abijah, was the 8th division. (1 Chr. 24:10). In Luke 1:8-11, we read that, “Once when Zechariah’s division was on duty and he was serving as priest before God, . . . . an angel of the Lord appeared to him” and told him that his wife would “bear a son.”

Several verses later, Luke writes, “When [Zechariah’s] time of service was completed he returned home. After this his wife Elizabeth became pregnant and for five months remained in seclusion.” (Lk. 1:23-24). Two verses later, Luke writes, “In the sixth month of Elizabeth’s pregnancy, God sent the angel Gabriel to Nazareth” to tell Mary that she “will conceive and give birth to a son, and you are to call him Jesus.” (Lk. 1:26-31).

After factoring in that all the priestly divisions served during the major festivals (rather than one at a time, as was typical) and that the Jewish calendar began in what is now roughly our March, scholars suspect that Zechariah was serving in late May or early June if it was Abijah’s first service of the year, or in December if it was his division’s second service of the year. Fifteen months from May or June is August or September; fifteen months from December is March. Thus, there is good reason to believe that Jesus was born in the spring or in the late summer or early fall, not in the winter.

If Jesus was not born anywhere near December 25, then how did his birthday come to be affiliated with this date? My earlier paragraphs have already made this clear, but it bears repeating: the holiday appears to have been born out of a co-opting of the Roman Empire’s celebration of the winter solstice.

The Romans celebrated Saturnalia, “a festival of light leading to the winter solstice, with the abundant presence of candles.” (Wikipedia’s “Saturnalia” page). “During the Roman mid-winter festival of Saturnalia, houses were decorated with wreaths of evergreen plants, along with other antecedent customs now associated with Christmas.” (Wikipedia’s “Christmas tree” page). “People visited and feasted with one another, giving presents, and decorating their homes with candles and evergreen branches.” (“Ancient Origins of Christmas”).

Saturnalia was generally celebrated from December 17 to 23. It paved the way for “Dies Natalis Solis Invicti,” another Roman festival—this one in recognition of the “Birthday of the Unconquerable Sun” on December 25.

In Egypt, then a province of the Roman Empire, people celebrated that “the Goddess Isis bore the Holy Child Horus on December 25th.” (“Isis & the Holiday Tree”). Apparently, the Egyptians, like the western part of the Roman Empire, decorated their homes with trees—palm trees, whereas the west used firs.

Moreover, the Romans in Arabia are believed to have celebrated the birth of their god—the moon—on December 24th. And the inhabitants of modern-day Scotland had a similar practice.

In other words, the celebration of the winter solstice was widespread throughout the vast Roman Empire. Constantine came into power as Roman emperor under the banner of Christianity after his victory at the Milvian Bridge in 312, and in 313 the Edict of Milan legalized Christianity throughout the empire.

Shortly thereafter, in 336 A.D., the church in Rome recognized Christmas as a holiday. Hislop, again in “The Two Babylons,” explains:

Why, thus: Long before the fourth century and long before the Christian era itself, a festival was celebrated among the heathen, at the precise time of the year, in honor of the birth of the son of the Babylonian queen of heaven; and it may fairly be presumed that, in order to conciliate the heathen, and to swell the number of nominal adherents of Christianity, the same festival was adopted by the Roman church, giving it only the name of Christ.

Hislop continues, “Upright men strove to stem the tide, but in spite of all their efforts, the apostacy went on, till the Church, with the exception of a small remnant, was submerged under Pagan superstition.”

Is this a fringe Christian conspiracy theory, or is it based in truth? For that, let us turn to some secular sources.

Encyclopedia Britannica’s “Christmas” page provides:

In ancient Rome, December 25 was a celebration of the Unconquered Sun, marking the return of longer days. It followed Saturnalia, a festival where people feasted and exchanged gifts. The church in Rome began celebrating Christmas on December 25 in the 4th century during the reign of Constantine, the first Christian emperor, possibly to weaken pagan traditions. . . . None of the contemporary Christmas customs have their origin in theological or liturgical affirmations, and most are of fairly recent date.

The History Channel’s “History of Christmas” page provides:

The middle of winter has long been a time of celebration around the world. Centuries before the arrival of the man called Jesus, early Europeans celebrated light and birth in the darkest days of winter. . . .

In Scandinavia, the Norse celebrated Yule from December 21, the winter solstice, through January. In recognition of the return of the sun, fathers and sons would bring home large logs, which they would set on fire. . . .

In Germany, people honored the pagan god Oden during the mid-winter holiday. . . .

In Rome, where winters were not as harsh as those in the far north, Saturnalia—a holiday in honor of Saturn, the god of agriculture—was celebrated. . . .

In addition, members of the upper classes often celebrated the birthday of Mithra, the god of the unconquerable sun, on December 25.

A 2018 Newsweek article titled “The Origins of Christmas: Pagan Rites, Drunken Revels and More” provides:

Hundreds of years before the birth of Christ, Romans exchanged gifts, sang songs and decorated their homes with evergreens. Instead of Jesus Christ, though, Saturnalia celebrated the Roman god Saturn. In fact, December 25 was the winter solstice on the Roman calendar, the shortest day of the year. We can still see the pagan origins of Christmas in many holiday traditions, including mistletoe, which symbolized fertility to pre-Christians and new life even in the depths of winter.

Also worth considering is whether the Christianization of the winter solstice may have been a veiled form of antisemitism, further distancing Jesus from his Jewish identity. A Messianic Jewish website, Jewish Voice, writes:

The Church, in 325 C.E. under Constantine, went to great lengths to separate faith in Yeshua from its natural and biblical Jewish identity. . . . As a result, the Jewish community views Christmas and Easter as holidays of a different religion that are not for Jewish people. Many Messianic Jews believe that celebrating Christmas could easily contribute to their Jewish families and friends believing that they have “stopped being Jewish” and “converted” to a different religion.

If I were a Christian in the Roman Empire in the 4th century, when Constantine formally Christianized the winter solstice, I highly doubt I would have felt okay celebrating Christmas. In fact, I might have found it to be anathema. I think of Paul’s admonition to the church at Corinth: “Do not be yoked together with unbelievers. For what do righteousness and wickedness have in common? Or what fellowship can light have with darkness?” (2 Cor. 6:14). Or consider all the verses in the OT commanding the Israelites to have nothing to do with foreign gods. The winter solstice historically was all about other gods—not just one, but perhaps a plethora. According to Michael Heiser’s recent landmark book “The Unseen Realm,” these gods could have been, or represented, actual fallen elohim, i.e., demonic entities.

The question, at least for me, then becomes whether the significant passage of time from the demise of these pagan origins alters the analysis. Are the roots adequately buried or severed so as to make the current practice of Christmas acceptable or glorifying to God? And while “man looks at the outward appearance,” does not “the Lord look[] at the heart”? (1 Sam. 16:7). Is not the heart of many Christians to worship God during Christmas?

And what is the fruit of Christmas? Millions of folks will have occasion to reflect on Jesus’s birth and “the reason for the season.” Undoubtedly, many have come to know Him through Christmas services, books, songs, or conversations.   

On the flip side, Americans purchased 32.8 million real Christmas trees last holiday season. Tens of millions of others erected fake trees. From a 30,000-foot view, might God find it highly offensive that so many residents of a “Christian nation” excitedly put something in their home that has no connection to Him or Jesus, and very much appears to be rooted in the worship of pagan deities?

Then there is the consumerism. “In 2021, total holiday retail sales were projected to have reached new highs of almost 850 billion U.S. dollars.” (Statista). Is this consumerism not a grave hindrance to the “reason for the season”? When I think of Christmas, I might very well first think of gifts, family time, trees, lights, and the like. Are these things not what is and will always be most powerful about the holiday in our culture, perhaps even in Christian culture? Would not God have told us to celebrate Christmas had he wanted that?

At present, I am seriously entertaining a tempered version of the Puritan view that “Christmas represent[s] nothing more than a thin Christian veneer slapped on a pagan celebration. Believing in the holiday was superstitious at best, heretical at worst.” It is difficult for me to imagine Jesus or the apostles sanctioning the celebration of Christmas on December 25th in the first few centuries. And if they would not have done it then, would they lend credence to it now? They would certainly understand current Christians’ widespread celebration of it in light of the buried history. But would they support it? I don’t know. I tend to think not.

A Short Story of Roe v. Wade, Part 2

In Part 1, we left off in January 1972, when Rehnquist and Powell had just been sworn in as the latest Supreme Court Justices—thereby replacing Black and Harlan, who had resigned about three months beforehand. Less than a month prior to their swearing in, the Court heard oral arguments in Roe v. Wade (challenging Texas’s abortion law) and its sister case Doe v. Bolton (challenging Georgia’s abortion law, which had just been passed a few years prior and was one of the most liberal in the country at the time).

Before picking back up with the story, some of the relevant history bears repeating. Remember, Roe and Doe came to the Court on appeal with no record from the lower courts: no trial, no discovery, no real documentary evidence, no expert reports, no witnesses, and no cross-examination—just two hour-long hearings. It was rare, if not unprecedented, for the Court to decide a case on the merits under such circumstances.

Moreover, just six years prior, not a single state in the country had expressly legalized abortion in cases of rape. Abortion was virtually uniformly prohibited except to protect the life of the mother. European laws were similar, aside from exceptions in the likes of Russia, Sweden, and Nazi Germany.

Yet, the sexual revolution in the developed world was changing cultures at a blistering speed and pressuring lawmakers to follow suit. As a result, between 1967 and 1971, thirteen U.S. state legislatures passed laws permitting abortion in cases of rape/incest and significant deformity to the child. These new “reform” laws were the most liberal in the country, and yet remarkably conservative in comparison to the ultimate holding in Roe—which would invalidate every single abortion law on the books in the country.

Prior to the handing down of Roe and Doe in 1973, and emboldened by the successful passage of the “reform” laws in thirteen states from 1967 to 1971, “activists decided that the reform laws were not allowing enough abortions and concluded that complete repeal was necessary.” (Clarke Forsythe, “Abuse of Discretion”). However, as noted by pro-choice academic and historian David Garrow, “In virtually every state where a repeal bill had been introduced in the legislature . . . prospects for passage appeared to range from bleak to nonexistent.”

Discouraged by the dismal prospects of repeal in the state legislatures, abortion rights activists began to focus more of their attention on the courts—albeit with limited success. In the years leading up to Roe, twelve courts had struck down state abortion laws; however, twenty-one had upheld state abortion laws and several others had thrown abortion cases out on procedural grounds. (Forsythe). Thus, the activists’ focus would soon center around the U.S. Supreme Court.

All right, back to the story. Immediately after the U.S. Supreme Court conducted oral arguments in Roe and Doe in December 1971, Justice Douglas took it upon himself to begin drafting an opinion striking down both Texas’s and Georgia’s abortion laws. This was because “[t]he Black and Harlan vacancies gave the four justices who favored striking down the abortion laws—Brennan, Douglas, Marshall, and Stewart—a great incentive to decide Roe and Doe without the votes of Powell and Rehnquist.” (Forsythe). Remember, Nixon—who was notably opposed to abortion—had just appointed Powell and Rehnquist.

However, before Douglas’s opinion could gain any traction, Chief Justice Burger informed Douglas that he had assigned the opinion to his childhood friend, Justice Blackmun—who Nixon had appointed the year before. (Forsythe).

Blackmun was a notoriously slow opinion writer. Perhaps this was because he “often doubted his own ability to do the job, and suspected that other Justices, like Hugo Black, Potter Stewart, and William O. Douglas, shared his doubts.” (Forsythe). Blackmun was also not nearly as pro-abortion as Douglas. In fact, it was not abundantly clear where Blackmun stood on the issue. As noted by the NYT, “few people would have predicted that this soft-spoken, 61-year-old judge [Blackmun], a lifelong Republican [from the Midwest] never known for breaking new ground or challenging the status quo, was about to embark on an extraordinary personal journey” and “become a passionate defender of the right to abortion.”

On January 17, 1972, Burger issued a memo to his fellow justices asking them whether Roe and Doe should be reargued before a full court now that Rehnquist and Powell were on board. Blackmun responded, voting in favor of reargument. The issue was left undecided, however, and several months passed by.

On May 18, 1972, Blackmun circulated draft opinions in Roe and Doe striking down both abortion laws on the basis that they were unconstitutionally vague. Justices Brennan, Douglas, Marshall, and Stewart joined the opinions without delay despite not being content with their scope. In fact, the very same day Blackmun distributed his draft opinion, Brennan wrote a memo to Blackmun exhorting him to revise the opinion to decide “the core constitutional issue,” i.e., to create a new constitutional right to abortion.

On May 29, 1972, Justice White issued a dissent critiquing Blackmun’s draft opinion. White wrote that “[i]f a standard which refers to the ‘health’ of the mother . . . is not impermissibly vague [as the Court had expressly held the year before in U.S. v. Vuitch], a statutory standard which focuses on ‘saving the life’ of the mother would appear to be a fortiori acceptable.” In other words, “life” is far less vague than “health,” and the Court had already decided that abortion for the health of the mother was not unconstitutionally vague—so the basis for Blackmun’s opinion was patently illegitimate.

NYU law professor Bernard Schwartz wrote that White’s dissent “effectively demonstrated the weakness of the Blackmun vagueness approach in striking down the Texas law.” Garrow, a self-described democratic socialist who has taught at Duke and UNC among other places, described White’s dissent as “incisive and influential” and an “ironic contribution to the Court’s consideration of Roe and Doe.” What Garrow meant by this is that “by effectively rebutting the vagueness rationale, [White] pushed the Court’s majority to go beyond vagueness and strike down the abortion laws under the broader ground of the Ninth or Fourteenth Amendments.” (Forsythe).

Two days after the release of White’s convincing dissent, Burger renewed his motion for additional oral argument. Blackmun, despite having just penned a draft opinion, again agreed. The four liberals, however, were irate—particularly Brennan and Douglas. They were concerned the decision could go 5-4 against them if Blackmun could be won over by White and company.

Brennan handwrote a note to Douglas stating:

I will be God-damned! At lunch today, [Stewart] expressed his outrage at the high-handed way things are going, particularly the assumption that [Burger] can order things his own way, and that he can hold up for nine anything he chooses, even if the rest of us are ready to bring down 4-3’s for examples. . . . [Stewart] wants to make an issue of these things—perhaps fur will fly this afternoon.

In the following days, Powell, Rehnquist, and White all voted in favor of reargument, which made for a 5-4 majority in favor of reargument.

Douglas was beside himself. On June 1, he sent a protest letter to Burger. On June 2, “Douglas sent Brennan the draft of a scorching dissent that he threatened to publish if the majority voted to rehear the abortion cases.” (Forsythe). In fact, historian and author James Simon, who interviewed Blackmun in 1991, noted that:

Douglas refused to withdraw his dissent until Blackmun personally assured him that his position of declaring the abortion statutes unconstitutional was firm and that he had no intention of reversing that position after reargument. Blackmun gave Douglas that assurance. . . [A]s it turned out, Justice Douglas was the biggest winner of all. His prolonged tantrum had produced a firm commitment from Justice Blackmun to hold to his original position of voting to strike down the Texas and Georgia statutes.

On June 26, 1972, the Court issued its order for reargument, which would occur in October 1972. Douglas’s draft dissent was nonetheless leaked to the press, and the New York Times and Washington Post ran stories about it. Garrow believed that Stewart leaked the dissent because of his disdain for Burger. (Forsythe).

As usual, the Court recessed for the better part of the summer. Blackmun, who had previously served as “resident counsel” for the Mayo Clinic, spent about two weeks there in July, “reportedly doing research on the history of abortion and the Hippocratic oath.” (The Hippocratic Oath was known to disavow abortion: “I will not give a lethal drug to anyone if I am asked, nor will I advise such a plan; and similarly I will not give a woman a pessary to cause an abortion,” it reads in pertinent part.)

While the justices recessed for the summer, their law clerks did not. Justice Blackmun’s law clerk George Freeman was working tirelessly on a revised opinion—one designed to circumvent Justice White’s critique and address “the core constitutional issue” like the liberal justices wanted. Near the end of the summer, the clerk drafted a memo to Blackmun, writing:

I have written in, essentially, a limitation of the right depending on the time during pregnancy when the abortion is proposed to be performed. I have chosen the point of viability for this “turning point” (when state interests become compelling) for several reasons: (a) it seems to be the line most significant to the medical profession, for various reasons; (b) it has considerable analytic basis in terms of the state interest as I have articulated it. The alternative, quickening, no longer seems to have much analytic or medical significance, only historical significance; (c) a number of state laws which have a “time cut-off” after which abortion must be strongly justified by life or health interests use 24 weeks, which is about the “earliest time of viability.”

As we will see in Part 3, viability would become a hallmark of the Court’s decision, despite the fact that “viability, and its implications, were never argued in the lower courts, never briefed in the Supreme Court, and never mentioned, even once, during the four hours of oral arguments in December 1971 and October 1972.” (Forsythe).

I will pick back up with Part 3 within a week or so. Below is a photograph of the Court from April 1972, after Powell and Rehnquist had joined. As I noted in Part 1, the Court’s older, white-male-dominated demographics are ironic for two reasons—both of which relate to our culture’s increasing disparagement of white men and “whiteness.” First, men have historically been more supportive of abortion rights than women in certain respects. Second, the U.S. abortion rate has always been far higher for African-Americans than it has been for whites.

20 Apr 1972, Washington, DC, USA — Original caption: This formal portrait of the U.S. Supreme Court Justices was made as the membership changed. Justices Powell and Rehnquist both took their seats on January 7th, 1972. Left to right in the front row is Potter Stewart, William O. Douglas, Chief Justice Warren E. Burger, Associate Justices William J. Brennan Jr., and Byron R. White. In the back row is Associate Justices Lewis Powell Jr., Thurgood Marshall, Harry A. Blackmun, and William H. Rehnquist. — Image by © Bettmann/CORBIS

A Short Story of Roe v. Wade, Part 1

It was the year 1971. Two abortion cases had just been appealed to the U.S. Supreme Court: the infamous Roe v. Wade—which was out of Texas—and its sister case Doe v. Bolton—which was out of Georgia.

The pseudonymous plaintiff-appellants Roe and Doe were Norma McCorvey and Sandra Cano, respectively. As of 1971, both women had already given birth to their children, and so some argued their cases were moot. In an effort to obtain an abortion in 1968, McCorvey had falsely claimed that she had been raped by a group of black men. After trying unsuccessfully to obtain an illegal abortion, McCorvey gave birth to a daughter and placed her up for adoption.

Cano, who was not seeking an abortion, had hired an Atlanta ACLU attorney named Margie Pitts Hames to work on her divorce and custody case. For reasons not entirely clear, “Hames applied Cano for an abortion without Cano’s knowledge. When the abortion was approved, Hames notified Cano, who strongly reiterated that she did not want an abortion.” Apparently concerned Hames would coerce her into having one, Cano fled to Oklahoma. Undeterred, Hames filed an abortion rights case on her behalf.

In the years immediately preceding Roe and Doe (1967 to 1971), thirteen states—including Georgia—had enacted “reform” bills drafted by the American Law Institute. At the time, these bills were seen as progressive. They permitted abortion for three additional reasons beyond protecting the life of the mother: (1) rape and incest, (2) serious and permanent bodily injury to the mother, and (3) significant deformity of the unborn child.

Prior to the thirteen states’ passage of these bills from 1967 to 1971, virtually every state prohibited abortion outright, except in instances of danger to the mother. In fact, in 1966 Mississippi had become “the first U.S. state to allow abortion in cases of rape.” Prior to Roe, the vast majority of states still prohibited abortion in most instances. To be clear, these laws generally “did not punish women for inducing abortions,” but rather only the abortion providers. Only three states permitted abortion somewhat broadly.

The U.S. was not an outlier in its restrictive position on abortion. The vast majority of countries, including Western European powers, had similar laws on the books. There were exceptions of course. In 1920, the Soviet Union “was the first country in the world to legalize all abortions.” In the 1930s, Poland “was the first country in Europe . . . to legalize abortion in cases of rape and threat to maternal health”; Nazi Germany had “amended its eugenics law . . . [to] allow[] abortion if a woman gave her permission, and if the fetus was not yet viable, and for purpose of so-called racial hygiene”; and Sweden legalized it on a limited basis.

However, the likes of France, Britain, Italy, the Netherlands, and most other countries around the world categorically prohibited abortion unless the mother’s life was in danger or her health would be permanently damaged.  This would remain the case for decades to come.

So, what happened that led to the rush of abortion rights in the late 1960s and 1970s? Well, keep in mind that the sexual revolution had been underway since the early 1960s in the States and around the rest of the developed world. As a result, “[t]he normalization of contraception and the pill, public nudity, pornography, premarital sex, homosexuality, masturbation, alternative forms of sexuality, and the legalization of abortion all followed.”

The sexual revolution was so culturally impactful that soon the U.S. Supreme Court began to weigh in on related issues—a case-in-point (pun intended) of the popular phrase “policy is downstream from culture.”

For example, in 1965, the Court established a constitutional “right of privacy” and found that it protected the rights of married couples against state restrictions on contraception. In 1969, the Court ruled that this newly memorialized “right of privacy” permitted private possession of obscene materials—i.e., states could not prohibit pornography possession.

Not only were the courts influenced by the sexual revolution; so were the executive and legislative branches. For example, in 1968, President Lyndon B. Johnson’s administration released a report calling for a repeal of all abortion laws. Also in 1968, the UK’s “Abortion Act” took effect, which legalized abortion on wide grounds.

Also worth mentioning is the escalating fear in the 1960s and beyond that the world was experiencing a “Population Bomb”, which, if not curbed, would lead to worldwide famine and societal upheaval. As a result, “[i]n the late 1960s the U.S. government became a major funder of population control programs overseas and built multilateral support through establishment of the U.N. Fund for Population Activities.” Furthermore, organizations such as Planned Parenthood began to “champion[] both abortion rights and global population control policies,” many of which “were racist by any reasonable definition.” (Ross Douthat, NYT, “The Ghost of Margaret Sanger”).

With these things in mind, we now circle back to the Court’s review of Roe v. Wade and Doe v. Bolton beginning in 1971. Remember, Roe was about Texas’s abortion law, which prohibited abortion except in cases of danger to the mother. Doe was about Georgia’s abortion law, which was one of the new “reform” laws and thought to be relatively progressive.

Up on appeal from the local U.S. District Courts in Texas and Georgia, neither Roe nor Doe had undergone a trial. In fact, by the time the cases made their way to the Supreme Court

[t]he factual records in Roe and Doe were virtually nonexistent—consisting merely of a complaint, an affidavit, and motion to dismiss that addressed legal, not factual, issues. No factual hearing. No witnesses. No testimony. No cross-examination. Just two hour-long hearings, in which the judges addressed procedural and jurisdictional issues more than substantive questions. And then a direct appeal to the Supreme Court was made, without any intermediate appellate review.

Clarke D. Forsythe, “Abuse of Discretion: The Inside Story of Roe v. Wade”.

The justice who would author the majority opinion in Roe—Justice Harry Blackmun—just four years later would write, “The problem is a complex one, about which widely differing views can be held, and, as such, it would be somewhat precipitate [i.e., impulsive] to take judicial notice of one view over another on the basis of a record as barren as this.”

Likewise, three years after Roe, Justice Marshall—who enthusiastically joined the majority opinion in Roe despite the case’s barren record—would note that the Court would “decline[] to decide important questions regarding ‘the scope and constitutionality of legislation’ . . . in the absence of ‘an adequate and full-bodied record.'” Several other justices echoed this very same principle.

But this is precisely what the Supreme Court would end up doing in Roe and Doe—taking judicial notice of a myriad of sociological, historical, and medical claims without any record before it.

Before we get too far ahead of ourselves, we again focus on September 1971—sixteen months before the Court would actually issue its opinion creating a constitutional right to abortion.

At the time, the high court consisted entirely of men. Nearly all of the justices were in their sixties, seventies, or eighties. With the exception of Justice Thurgood Marshall—who was the first African-American to be appointed to the U.S. Supreme Court (four years prior by President Johnson)—all of them were white. The two oldest—85-year-old Justice Black and 72-year-old Justice Harlan—would soon resign, and both would pass away within months of stepping down.

The demographics of the Court in 1971 are ironic for two reasons—both of which relate to our culture’s increasing disparagement of white men.

First, while men and women have broadly similar views on abortion, the polls make clear that more women than men believe abortion should be illegal. In 2000, for example, an LA Times poll found that “72% [of women] believe second-trimester abortions should be illegal, compared with 58% of men.” Many men support abortion, for example, so that sex has fewer strings attached (e.g., no child support if a child is aborted), or so that employment policies need not be revised to accommodate biological differences, since women who want workplace equality with men can simply choose abortion.

Second, “the abortion rate is five times higher for African-Americans than for whites.” (Douthat, NYT). “Overall, 43 percent of pregnancies among black women end in abortion.” (Trillia Newbell, “Abortion and Black Women”). It is likely because of harrowing statistics like these that African-American U.S. Supreme Court Justice Clarence Thomas recently likened the Court’s decision in Roe to that of Dred Scott—calling them “the Court’s most notoriously incorrect decisions.”

After yet another short detour, we again return to late-1971 when the two abortion cases were pending before the high court. Before President Nixon could replace the two retired Justices (Black and Harlan), the Court pressed forward with oral arguments in Roe and Doe. Roe’s most prominent attorney, Roy Lucas—a leading abortion rights activist—“feared that he had to get an abortion case up to the Court quickly, before any Nixon appointments could swing the Supreme Court conservatively.” (Forsythe). In fact, “many believed that, with Black and Harlan gone, the Court could go 4-3 in favor of abortion (Marshall, Brennan, Stewart, and Douglas).” Id. Justices Burger, Blackmun, and White were seen as more conservative.

On December 13, 1971, the Court heard oral arguments with just seven justices instead of the usual nine. Despite the complete absence of a factual record, the arguments lasted just thirty minutes per side.

Three days later, the justices met “in Conference” to vote on the cases. Justices Douglas and Brennan “led the proabortion block.” They were also the two oldest justices at the time. As reported by renowned journalist Bob Woodward in his 1979 book “The Brethren: Inside the Supreme Court”, “Douglas had long wanted the Court to face the abortion issue head on” and was prepared to render “a sweeping reading to the Constitution [i.e., creating abortion on demand] on this increasingly volatile issue.”  

Justices Douglas and Brennan merit yet another short detour. Justice Douglas was appointed to the high court in 1939 at the age of forty. He remains the longest-serving Supreme Court justice in the history of the Court. He was undoubtedly a polarizing figure. According to Justice Frankfurter, Douglas did not value judicial consistency or stare decisis (i.e., the legal principle of deciding cases according to precedent). According to Judge Posner—”widely considered to be one of the most influential legal scholars in the United States”—Douglas was

“a bored, distracted, uncollegial, irresponsible” Supreme Court justice, as well as “rude, ice-cold, hot-tempered, ungrateful, foul-mouthed, self-absorbed” and so abusive in “treatment of his staff to the point where his law clerks—whom he described as ‘the lowest form of human life’—took to calling him ‘shithead’ behind his back.”

Meanwhile, Justice Brennan, like Douglas, is widely considered one of the “most liberal Supreme Court justices in American history.” Prior to oral arguments in Roe and Doe, and perhaps as far back as the mid-1960s, he and Justice Douglas were apparently plotting to create broad abortion rights. In fact, just a few weeks prior to the oral arguments, Justice Douglas had assigned Justice Brennan to write the majority opinion in the case of Eisenstadt v. Baird. There, the issue was whether states could bar unmarried individuals from using contraception.

In his opinion, Justice Brennan wrote, “If the right of privacy means anything, it is the right of the individual, married or single, to be free from unwarranted governmental intrusion into matters so fundamentally affecting a person as the decision whether to bear or beget a child.” (Emphasis added).

As noted by author and attorney Clarke Forsythe, this was “a classic ipse dixit (‘it is true because I say so.’). It is simply an assertion of judicial will”—one that was anathema to virtually all American legislators just a couple decades prior, let alone at the time of our founding. It is one thing to use contraception in the privacy of one’s own sex life; it is quite another to involve a physician or abortion provider in the taking of human life in a health facility open to the public.

In a subsequent memo to Justice Douglas, Brennan wrote that this language of his in Eisenstadt would be “useful” for the opinion in Roe and Doe. “Brennan knew well the tactic of ‘burying bones’—secreting language in one opinion to be dug up and put to use in another down the road,” as noted by author and NYT contributor Edward Lazarus in his book “Closed Chambers: The Rise, Fall and Future of the Modern Supreme Court”.  

After the December 16, 1971 conference among the justices, Justice Douglas “immediately started to draft an opinion striking down [both Texas’s and Georgia’s] abortion laws.” (Forsythe). Chief Justice Burger, however—who just the year before had been influential in having President Nixon appoint his childhood friend Harry Blackmun to the bench—“had already assigned the opinion to Justice Blackmun.” (Forsythe). Douglas protested, but Burger stuck to his guns. Nonetheless, Douglas circulated his draft opinion to Brennan alone on December 22, 1971.

A week later, Douglas and Brennan conversed privately regarding their like-minded intention to enshrine very liberal abortion rights in the Constitution. The next day, Brennan sent an eleven-page letter to Douglas laying out “his views on the right of privacy and his conviction that they could use the cases to decisively set forth ‘the existence and nature of a right to an abortion.’” (Forsythe).

But just over a week later, both William H. Rehnquist and Lewis F. Powell were sworn in as the latest Supreme Court justices, after having been appointed by President Nixon a few months prior. And Nixon, who was notably anti-abortion, was up for reelection in November 1972; if successful, he would be sworn in by Chief Justice Burger for a second term on January 20, 1973.

Perhaps not wanting to embarrass Nixon with a wildly polarizing pro-abortion decision immediately before his second term and/or wanting to give the new justices an opportunity to weigh in, Burger was happy to delay the rulings in Roe and Doe for a bit longer—much to Justice Douglas’s and Brennan’s chagrin. Perhaps this is why Burger assigned the opinion to Justice Blackmun—who “was a notoriously slow writer of opinions.” (Forsythe).

With that, we conclude Part 1 of “A Short Story of Roe v. Wade.” I intend to pick back up next week. Below is a photograph taken in January 1971 of the U.S. Supreme Court before Justices Black and Harlan resigned that September.

22 Jan 1971 — United States Supreme Court. Front row: Justice John M. Harlan, Justice Hugo L. Black, Chief Justice Warren E. Burger, Justice William O. Douglas, Justice William J. Brennan, Jr. Back row: Justice Thurgood Marshall, Justice Potter Stewart, Justice Byron R. White, and Justice Harry A. Blackmun. — Image by © Bettmann/CORBIS

Is the U.S. a Christian, New Age, or Syncretistic Nation?

I am often struck by those who think traditional Christian beliefs are irrational or silly, but simultaneously maintain a worldview for which there is little evidence. Or one that just a few decades ago would have been seen as fringe, kooky, or naïve.

For example, I recall pressing an intelligent friend of mine—who used to be a devout Christian yet now identifies as an agnostic—to provide me with her grand theory on humanity and the universe. She answered with something to the effect of, “I tend to think we are all parasites sucking the life flow out of the earth.” Or, just the other day, my hygienist described herself as spiritual but not religious—a popular self-designation nowadays, often abbreviated SBNR.

Views such as these that are atypical historically are not limited to the Nones—a relatively new term to describe people with no religious affiliation. Rather, they are now held by a significant number of self-identifying Christians.

This may come as a surprise to some. “We are a Christian nation,” many of us declare. And on some level, that is true. See Mark David Hall’s “Did America Have a Christian Founding?” for example. Well-known pastor Robert Jeffress recently preached a sermon titled, “America is a Christian Nation,” declaring that “America is—and always has been—a Christian nation.” Or Google the search terms “the most Christian nation in the world” and see what appears atop the search results.

But U.S. “Christianity” is increasingly taking on a hybrid nature. Many tend to think that religious syncretism (i.e., the blending of two or more belief systems) is restricted to the likes of the Caribbean (e.g., Rastafarianism), parts of Africa (e.g., Vodou) and Asia, and other less developed areas. Westerners, we tell ourselves, fit into nice, neat categories: Catholics and Protestants and, to a lesser extent, Jews, Muslims, atheists, and agnostics.

But not so fast. The evidence suggests otherwise. According to a recent Pew Research Center study titled, “‘New Age’ beliefs common among both religious and nonreligious Americans”:

roughly six-in-ten American adults accept at least one of these New Age beliefs [including reincarnation, astrology, psychics and the presence of spiritual energy in physical objects like mountains or trees]. Specifically, four-in-ten believe in psychics and that spiritual energy can be found in physical objects, while somewhat smaller shares express belief in reincarnation (33%) and astrology (29%).

That can’t be right, can it? According to a recent Pew religious landscape study, 65% of Americans still identify as Christian. How can six-in-ten Americans accept New Age beliefs, then? That adds up to no less than 125%, not even factoring in other religions or beliefs.  

Pew elaborates:

While eight-in-ten Christians say they believe in God as described in the Bible, six-in-ten [Christians] believe in one or more of the four New Age beliefs analyzed here, ranging from 47% of evangelical Protestants to roughly seven-in-ten Catholics and Protestants in the historically black tradition.

In helping its readers understand these New Age beliefs, Pew links to Britannica’s New Age movement page, which defines it as:

a movement that spread through the occult and metaphysical religious communities in the 1970s and ’80s. . . . The movement’s strongest supporters were followers of modern esotericism, a religious perspective that is based on the acquisition of mystical knowledge and that has been popular in the West since the 2nd century AD, especially in the form of Gnosticism.

Per Britannica, its ideas include:

First, . . . a New Age of heightened spiritual consciousness and international peace would arrive and bring an end to racism, poverty, sickness, hunger, and war. . . . Second, . . . . a foretaste of the New Age through their own spiritual transformation. Initial changes would put the believer on the sadhana, a new path of continual growth and transformation.

Pew’s findings are less surprising when one considers that the percentage of religiously unaffiliated Americans (the Nones) increased from 16% to 26% between 2007 and 2019—a remarkable spike, the likes of which this country has never seen. The percentage of self-identifying Christians dropped from 78% to 65% over the same period. In 1990, 85% of Americans identified as Christians, per Wikipedia. In 1960, that figure was 92%, according to Gallup.

According to a study released by non-partisan PRRI (Public Religion Research Institute) in 2021, “the share of the [U.S.] population identifying as white evangelical [dropped] from 23 percent in 2006 to 14.5 percent [in 2020].”

Pew’s and PRRI’s research comports with that of various Christian polling groups. In 2015, OmniPoll conducted a study that found that just 17% of practicing Christians have a biblical worldview—as defined by The Barna Group, an evangelical polling firm in California.

Barna defined “biblical worldview” simply to include beliefs that (1) there is absolute moral truth; (2) the Bible is inerrant in all the principles it teaches; (3) Satan is a real figure, not symbolic; (4) a person cannot earn their way to heaven; (5) Jesus lived a sinless life; and (6) God is an all-knowing, all-powerful creator of the world who still rules the universe today. This definition is what C.S. Lewis might describe as Mere Christianity. How is it that just 17% of practicing Christians hold to these beliefs?

Barna had previously conducted a study in 2003 that found that while 51% of U.S. adults claimed to possess a biblical worldview, just 4% did.

In 2017, Barna conducted another study among Christians to gauge “how much of the tenets of other key worldviews—including new spirituality, secularism, postmodernism and Marxism—have influenced [their] beliefs.” The results were summarized as follows:

61% agree with ideas rooted in New Spirituality.
54% resonate with postmodernist views.
36% accept ideas associated with Marxism.
29% believe ideas based on secularism.

With respect to New Spirituality, Barna found that almost 30% of practicing Christians believe that “all people pray to the same god or spirit, no matter what name they use for that spiritual being” and that “meaning and purpose come from becoming one with all that is.” In summary, over 60% “of practicing Christians embrace at least one of the ideas rooted in New Spirituality.”

With respect to Post-Modernism, 37% of practicing Christians under the age of forty-five believe that “what is morally right or wrong depends on what an individual believes.” Twenty-nine percent believed that “if your beliefs offend someone or hurt their feelings, it is wrong.” (How the heck do you reconcile those two beliefs?)

With respect to Marxism, 30% of practicing Christians under forty-five years old believe that “the government, rather than individuals, should control as much of the resources as necessary to ensure that everyone gets their fair share.”

Earlier this year, Arizona Christian University’s Cultural Research Center conducted an extensive study that found that while 69% of Americans identify as Christians and 35% identify as born again Christians, just 6% have a biblical worldview (defined as being similar to the above).

Of the 6%, a significant number held other beliefs that historically are not reconcilable with orthodox Christian beliefs. For example, 52% believe “people are basically good.” Forty-two percent believe “that having faith matters more than which faith you pursue.” And 39% do not believe the Holy Spirit is real, but rather “merely a symbol of God’s power, presence, or purity.”

Among the 69% of Americans who identified as Christian, only 53% thought “telling a falsehood of minor consequence in order to protect their personal best interests or reputation” is morally unacceptable. Just 55% thought that “having an abortion because their partner has left and the parent knows they cannot reasonably take care of the child” was morally unacceptable. And only 32% thought pre-marital sex “with someone you love and intend to marry in the future” was unacceptable.

What are we to make of all these findings? First, orthodox Christianity is clearly on the decline. Second, the momentum appears to be unstoppable. Third, the U.S. is not currently a Christian nation in any true sense of the term and is more accurately described as syncretistic.

Fourth, we need to be honest with ourselves about what we believe, explore the roots of our beliefs, and then wrestle with the evidence for and against them.

Here’s cultural commentator Jackie Hill Perry describing what the Gospel of Jesus is in a six-minute segment.

Abortion: Texas, Dobbs, and IAQs (Infrequently Asked Questions)

“If the unborn is not a human person, no justification for abortion is necessary.
However, if the unborn is a human person, no justification for abortion is adequate.” -Gregory Koukl

While I would like to nuance this quote a bit, I agree with it in principle and think it cuts to the heart of the abortion debate. As many of you know, abortion is a particularly hot topic right now—perhaps as hot as it has been since Roe v. Wade was decided forty-eight years ago. So, if there is ever a time to beef up your grasp of the subject, it may be now.

Before I list and answer some interesting questions on the matter, I will briefly explain how abortion is once again near the center of the national stage and what is at stake in the months ahead.

The Texas Abortion Law Case

This past Monday, the Supreme Court heard close to three hours of oral arguments regarding Texas’s recent abortion law. Though commonly misunderstood, the law makes it illegal to perform an abortion, or to aid and abet anyone in having an abortion, after cardiac activity is detected—which typically occurs around the six-week mark.

To be clear, the law does not criminalize the prohibited behavior. Rather, it imposes civil liability on abortion providers and those who aid and abet women having an abortion, but not the women themselves. That is, women cannot be sued under the law; however, it is undeniable that women are effectively prevented from having abortions after the six-week mark in most instances. Exceptions exist for the life or health of the mother, but not for rape or incest.

On its face, the law clearly runs afoul of Roe v. Wade. The Texas legislature, however, attempted to circumvent Roe by banning state officials from enforcing the law. Instead, it tasked citizens to do so—regardless of their connection, or lack thereof, to the abortion or the woman having it.

In response to the law, abortion providers in Texas sued the state and quickly petitioned the U.S. Supreme Court to block the law on the basis that it contravened Roe. The high court, however, declined to do so on the grounds that Texas might not be the proper party to be sued, since it lacked enforcement authority; rather, as noted above, that authority rests with private citizens.

Not to be deterred, abortion providers, as well as the federal government, re-petitioned the Supreme Court to block the law. This time, they argued that Texas could not lawfully immunize itself from a lawsuit by leaving enforcement strictly to its citizens.

This was the issue primarily before the Court during Monday’s oral arguments, not abortion rights themselves. Many would have found the arguments dull or esoteric. During arguments, for example, “a 1908 case called Ex Parte Young kept coming up.” (NYT, “What is Ex Parte Young, much-discussed in the Texas abortion case?”).

The Mississippi Abortion Law Case

Exactly one month after oral arguments in the Texas case, the U.S. Supreme Court will hear oral arguments in a case out of Mississippi styled Dobbs v. Jackson Women’s Health Organization. In that case, the issue is whether Mississippi’s ban on abortions after the fifteenth week of pregnancy except in cases of medical emergencies or fetal abnormalities is unconstitutional. According to Roe and its progeny, it undoubtedly is. The lower courts ruled as much, including the very conservative Fifth Circuit. So, why did the U.S. Supreme Court take the case, then? Many think it is because “Roe Is as Good as Gone,” as one recent NYT headline put it. A sizeable contingent of pro-lifers, however, are not so sure (myself included). We have been disappointed before.

With that, let me pose and answer some interesting questions about abortion.

What did Roe actually hold? The issue in Roe was whether the U.S. Constitution provides a right to an abortion. Prior to Roe, there was no such right. In Roe, the U.S. Supreme Court ruled that women do in fact have a right to an abortion by virtue of the Fourteenth Amendment’s Due Process Clause and, more specifically, the fundamental “right to privacy” that the Court had previously inferred from it.

In addition to recognizing a woman’s right to choose, Roe also recognized the states’ “legitimate interests in protecting both the pregnant woman’s health and the potentiality of human life.” With this in mind, the Court created a trimester framework. Under the framework, the state’s interest in protecting the woman’s health was recognized at the outset of the second trimester. Its interest in protecting prenatal life was recognized at the onset of viability (i.e., often near the start of the third trimester).

During the first trimester, then, women would have an absolute right to an abortion that could not be regulated. Beginning with the second trimester, women would still have a right to an abortion; however, the state could impose regulations (not restrictions) if they were reasonably related to the mother’s health. Beginning with viability, which the Court noted “is usually placed at about seven months (28 weeks) but may occur earlier, even at 24 weeks,” the state could regulate or even ban abortion except where necessary to preserve the mother’s life or health.

Is Roe still the standard? For the most part. In the landmark 1992 case of Planned Parenthood v. Casey, the Court affirmed Roe’s “central holding” in a “bitter 5-to-4 decision.” In other words, it affirmed “that viability marks the earliest point at which” a state can prohibit abortion. Casey, 505 U.S. at 835. However, it also ruled that “Roe’s rigid trimester framework is rejected.” Id. at 837. Moreover, it held that a state has a legitimate interest “in potential life throughout pregnancy,” not just beginning in the second trimester. Id.

In doing so, the Court imposed a new “undue burden” standard. In other words, while the states could adopt regulations to, say, ensure that the woman’s choice is informed by subjecting her to a 24-hour waiting period, any such “measures must not be an undue burden” on her right to an abortion up until viability. Id. at 878.

It is important to note that neither Roe nor Casey required states to regulate or ban abortion post-viability. In fact, several states permit late-term abortions, a few without restriction.

What was the state of abortion law immediately prior to Roe? It varied state to state. According to the pro-choice Guttmacher Institute, “legal abortions were already available in 17 states under a range of circumstances beyond those necessary to save a woman’s life.”

What was the state of abortion law in early-America? It appears to have mirrored English common law, which criminalized abortions after quickening, i.e., the first movement in utero. Roe, 410 U.S. at 132. At the time, quickening was thought to have occurred around the four-month mark. Id. This being said, in those days “abortion was extremely rare and unmarried women facing crisis pregnancies could rely on society and the courts to force the father into doing the right thing.” (World News Magazine, “Did Colonial America have abortions? Yes, but …”).

In 1803, England made it a capital crime to abort a quick fetus and “provided lesser penalties for the felony of abortion before quickening.” In America, Connecticut was the first state to enact prohibitive abortion legislation. It did so in 1821. New York followed in 1828, with an exception for instances jeopardizing the mother’s life. “By the end of the 1950’s,” however, “a large majority of [states] banned abortion . . . unless done to save or preserve the life of the mother.” Roe, 410 U.S. at 140.

If Roe is overturned, what would then be the state of abortion law? It depends. If Roe were overturned in its entirety, the power to govern abortion would lie entirely with the states. Put differently, “[t]he States [then] may, if they wish, [continue to] permit abortion on demand, but the Constitution [would] not require them to do so.” Casey, 505 U.S. at 979 (Scalia, J., dissenting).

According to the pro-choice Guttmacher Institute, “26 States Are Certain or Likely to Ban Abortion Without Roe.” The article is highly misleading because the word “ban” is implicitly defined to mean virtually any restriction on abortion. For example, under the heading “States Likely to Ban Abortion,” it cites to Montana because that state recently restricted “abortion at 20 weeks of pregnancy.” The article notes that just five states would have a “Near-total ban.” According to pro-life attorney David French, “ending Roe would cut nationwide abortions by less than 13%.” (National Review, “In a Post-Roe World, Pro-Lifers Would Still Have a Lot of Work Left to Do”).

What is the state of abortion law in Europe? It is apparently far more conservative than it is in the U.S. According to the Lozier Institute, “47 out of 50 European nations limit elective abortion prior to 15 weeks,” whereas “zero out of 50 U.S. states have currently enforceable limits on abortion at 15 weeks.” Moreover, “[n]o European nation allows elective abortion through all 9 months of pregnancy, as is effectively permitted in several U.S. states, including California, Massachusetts, Maryland, and New York.”

While those of a pro-choice persuasion may view the Lozier Institute as an undependable source, its findings appear to comport with mainstream or liberal sources. See, for example, Wikipedia’s “Abortion law” page, which cites to Oxford University Press when stating, “When it comes to later-term abortions, there are very few [countries in Europe] with laws as liberal as those of the United States.” See also EURACTIV, “Abortion rights: An open wound in many European countries” (“Most EU countries allow abortion on demand up to 10 or 14 weeks of pregnancy, including France, Belgium, Denmark, and Greece”).

What is the state of abortion law in the rest of the world? According to a University of Georgia law professor who clerked for Justice Anthony Kennedy, the U.S. is “one of only six nations on the list allow[ing] unrestricted abortion to the point of viability.” The other five are “Canada, China, Netherlands, North Korea, . . . and Vietnam.” This should shock the conscience. Many countries ban abortion except when necessary to preserve the mother’s life. “Performing an abortion because of economic or social reasons is accepted in 37% of countries. Performing abortion only on the basis of a woman’s request is allowed in 34% of countries.” (Wikipedia, “Abortion Law” (citing 2020 U.N. report)).

What are the primary reasons women have abortions? A study published by the Guttmacher Institute reported the following:

The reasons most frequently cited were that having a child would interfere with a woman’s education, work or ability to care for dependents (74%); that she could not afford a baby now (73%); and that she did not want to be a single mother or was having relationship problems (48%). . . . In both surveys, 1% indicated that they had been victims of rape, and less than half a percent said they became pregnant as a result of incest.

When do women have abortions during the gestational period? According to the CDC’s Abortion Surveillance 2018 report, “A total of 619,591 abortions for 2018 were reported to CDC from 49 reporting areas.” The Guttmacher Institute, however, reports significantly more abortions per year (in 2017, it reported 862,320 compared to the CDC’s 612,719). As to when they occurred, we will apply the CDC’s gestational-age percentages to the Guttmacher Institute’s numbers (the rough arithmetic is sketched after the list):

  • 77.7%: performed at ≤9 weeks’ gestation (670,022)
  • 14.5%: performed at 10–13 weeks’ gestation (125,036)
  • 6.9%: performed at 14–20 weeks’ gestation (59,500)
  • 0.9%: performed at ≥21 weeks’ gestation (7,760)
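
For transparency, here is the rough arithmetic that appears to lie behind the counts in parentheses (my own inference, not a presentation the CDC or Guttmacher themselves offer): the CDC’s percentages applied to Guttmacher’s 2017 total of 862,320, with small differences due to rounding.

\[
\begin{aligned}
862{,}320 \times 0.777 &\approx 670{,}022 \\
862{,}320 \times 0.145 &\approx 125{,}036 \\
862{,}320 \times 0.069 &\approx 59{,}500 \\
862{,}320 \times 0.009 &\approx 7{,}760
\end{aligned}
\]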

According to NCBI, “a fetus resembles the mature human form at about week 9 of gestation during embryogenesis.” Click here for Google images of fetuses at nine weeks’ gestation.

Did the Hippocratic Oath address abortion? A Texas abortion provider recently said that the restrictive Texas abortion law “has made [abortion] providers fearful of being sued for treating patients and holding true to the Hippocratic oath.” Perhaps she meant some new or Texas version of the Hippocratic Oath. The original Oath, as noted by the Supreme Court in Roe:

varies somewhat according to the particular translation, but in any translation the content is clear: “I will give no deadly medicine to anyone if asked, nor suggest any such counsel; and in like manner I will not give to a woman a pessary to produce abortion,” or “I will neither give a deadly drug to anybody if asked for it, nor will I make a suggestion to this effect. Similarly, I will not give to a woman an abortive remedy.”

How do Planned Parenthood, anti-racism, and pro-choice intersect? Planned Parenthood is by far the leading provider of abortions in the U.S. By some accounts, it conducts approximately 35–45% of all U.S. abortions annually. (See Heritage Foundation, “Planned Parenthood Sets New Record for Abortions in a Single Year”).

According to a recent NYT article by Ross Douthat, the organization, however, “has eugenic ideas close to its root, and while [its founder Margaret] Sanger herself was pro-contraception rather than pro-abortion, her successors championed both abortion rights and global population control policies that were racist by any reasonable definition.”

Douthat goes on to note that after the legalization of abortion in the U.S., “white births dipped only slightly . . . while the nonwhite birthrate dropped by 15 percent.” Today, “the abortion rate is five times [higher] for African Americans than for whites.”

Douthat is far from alone in his observations. A 2020 article published by the National Center for Biotechnology Information (NCBI) notes:

Black women have been experiencing induced abortions at a rate nearly 4 times that of White women for at least 3 decades, and likely much longer. The impact in years of potential life lost, given abortion’s high incidence and racially skewed distribution, indicates that it is the most demographically consequential occurrence for the minority population. The science community has refused to engage on the subject and the popular media has essentially ignored it. In the current unfolding environment, there may be no better metric for the value of Black lives.

In fact, “[a]ccording to the Department of Public Health of every state that reports abortion by ethnicity, black women disproportionately lead in the numbers. For example, in Mississippi, 79 percent of abortions are obtained by black women; in Washington, D.C., more than 60 percent; in Georgia, 59.4 percent.” (Center for Urban Renewal and Education, “The Effects of Abortion on the Black Community”).

According to leading anti-racist activist Dr. Ibram X. Kendi in his book “How to Be an Antiracist” (which according to Publishers Weekly was the fourteenth best-selling book of 2020), every policy is inherently racist or anti-racist. There are no neutral policies. More specifically, he writes:

A racist policy is any measure that produces or sustains racial inequity between racial groups. An antiracist policy is any measure that produces or sustains racial equity between racial groups. . . . There is no such thing as a nonracist or race-neutral policy. Every policy in every institution in every community in every nation is producing or sustaining either racial inequity or equity between racial groups.

Thus, it very much appears that the policy of abortion is systemically racist, at least by Dr. Kendi’s definition. It appears to have a disparate impact on African-Americans as large as, if not larger than, that of incarceration and police brutality. Perhaps this is part of the reason why African-American Supreme Court Justice Clarence Thomas recently drew parallels between Roe and the abominable Dred Scott decision.

Is there a consensus on when life begins? In Roe, the state argued that its interest in protecting pre-natal life existed throughout a pregnancy, not merely after viability. The Court addressed this contention by writing:

We need not resolve the difficult question of when life begins. When those trained in the respective disciplines of medicine, philosophy, and theology are unable to arrive at any consensus, the judiciary, at this point in the development of man’s knowledge, is not in a position to speculate as to the answer.

Roe, 410 U.S. at 159. This is an interesting remark for multiple reasons. First, courts exist for the very purpose of resolving disputed questions, including very complex ones. In countless cases—involving a myriad of different fields of expertise—opposing litigants will proffer conflicting expert testimony. That is, one side’s expert says this, while the other side’s expert says that. Courts, then, are required to sort through the conflicting testimony, make credibility determinations, and arrive at a conclusion—whether in the fields of medicine, engineering, law, etc.

Second, if there was no consensus of opinion as to when life began when Roe was decided in 1973, there very much appears to be a consensus now. As far back as 1981, after experts in embryology and human development testified before a U.S. Senate Judiciary Committee, the committee reached the following conclusion:

Physicians, biologists, and other scientists agree that conception marks the beginning of the life of a human being—a being that is alive and is a member of the human species. There is overwhelming agreement on this point in countless medical, biological, and scientific writings.

Peter Singer, a pro-choice Princeton University bioethicist and philosopher, also agrees:

[T]here is no doubt that from the first moments of its existence an embryo conceived from human sperm and eggs is a human being. . . [T]he same is true of the most profoundly and irreparably intellectually disabled human being, even of an anencephalic infant—that is, an infant that, as a result of a defect in the formation of the neural tube, has no brain.

This is because, as stated by embryologist E.L. Potter, “Every time a sperm cell and ovum unite, a new being is created which is alive and will continue to live unless its death is brought about by some specific condition.” Unlike a sperm or an egg or a cell, the embryo is developing into a mature human being.

In a 2008 abortion case styled Planned Parenthood v. Rounds, the Eighth Circuit found that it was not an undue burden on women to require abortion providers to state that the fetus is a “living, separate, whole human being.” The court noted that this is a biological fact. For support, the court highlighted that even Planned Parenthood’s very own expert witness, bioethicist Paul Root Wolpe, Ph.D. of the Emory Center for Ethics, executed an affidavit stating:

To describe an embryo or fetus scientifically and factually, one would have to say that a living embryo or fetus in utero is a developing organism of the species Homo Sapiens which may become a self-sustaining member of the species if no organic or environmental incident interrupts its gestation.

In closing, we need to seriously ask ourselves if true social justice is standing for women’s rights or the unborn’s rights. As Mother Teresa once noted, “Abortion is profoundly anti-women. Three quarters of its victims are women: half the babies and all the mothers.”

Finally, here is a link to a short video featuring adorable Richard Scott William Hutchinson, “who holds the world record for being the most prematurely delivered baby to survive.” He weighed 11.9 ounces and was born at a gestational age of 21 weeks, 2 days. A few months ago, he joyfully celebrated his first birthday.

Do(n’t) Judge a Culture By Its Billboard Hot 100

Earlier today, the NYT published a guest essay by transgender activist Jennifer Finney Boylan. The title of the essay is “Should Classic Rock Songs Be Toppled Like Confederate Statues?” Therein, Finney muses about whether we should discontinue listening to classic rock songs by the likes of Don McLean, Johnny Cash, Elvis, Eric Clapton and others because of the past “sins of [these] historical figures.” “In other words: The problem with [their music] isn’t the song[s]. It’s the singer[s].” Finney then cites to “racist rants,” “anti-vaccination activism,” and prior sex abuse offenses in some of their pasts.

Finney does not conclusively answer the question posed, but closes by writing, “reconsidering those songs, and their artists, can inspire us to think about the future and how to bring about a world that is more inclusive and more just.”

While I have been a mild classic rock fan since my freshman-year college roommate turned me onto the genre, I am no fanatic. So, I don’t take Finney’s criticism to heart. I do not doubt that some of its stars have very checkered, even degenerate pasts. I think Finney, however, is barking up the wrong tree and ignoring the elephant in the (musical) room. I will start with a personal anecdote.

A couple weeks ago, I signed up for a ten-pack of classes at Orange Theory—a high-intensity interval training (HIIT) workout consisting of rowing, cardio, and strength training. Despite branding its workouts as physically grueling and not for the faint of heart, the class was composed mainly of middle-aged folks. Approximately eighty percent were women. The fitness studio is located in an upper-middle class suburb about thirty minutes outside of D.C.

Not long into my first hour-long class, I was struck by the explicit lyrics of some of the songs. As you might imagine, the music was playing loudly. One such song, I would later find out, is titled Hood Go Crazy. It is littered with misogynistic, sexually graphic lines and contains eighteen 4-letter words, including the worst ones. Make no mistake, we were not listening to the “clean version.” The song is by an artist known as 2 Chainz. He has a criminal record that includes a felony drug possession charge, and he has apparently been sued for assault for one incident and harassment for another.  

Later in the class, an even more sexually explicit song came over the loudspeakers. The B-word was used repeatedly, and the theme of the song seemed to be that the artist could treat women as poorly as he pleased and yet still get with as many as he wanted to.

It is worth reiterating that I was not taking this fitness class with high school students or twenty-something year-olds, which would have been bad enough. Rather, I was surrounded primarily by female Gen Xers and older Millennials (i.e., thirty to fifty year-olds). Call me old-fashioned, but the experience felt surreal as we ran, rowed, and lifted weights together while listening to those terribly misogynistic, degrading songs. (Several of the songs were fine, thankfully).

This reminded me of an experience I had last summer when at my local high school’s track for a run. I posted about the experience on my Facebook page shortly afterwards:

Last week, I went to the track of the high school in my new neighborhood, Oakton, VA (a relatively diverse upper middle class suburb 20 miles outside of DC), to go for a run. On the field inside of the track, a diverse group of 15-20 girls (presumably high schoolers) were playing soccer, along with what appeared to be at least one coach. On some level, it was a beautiful picture, as the girls were a mixture of black, white, and perhaps Latino/Middle Eastern. When I arrived, however, a loud speaker was blaring a song I would later discover is titled, “I Don’t Give a F*ck About You” by Big Sean (on YouTube, three music videos of the song have a collective 442 million — yes, million — views!). As I and several others ran or walked around the track, the chorus thundered several times over at an estimated decibel level of 110+:

I don’t f*** with you
You little stupid *ss b****, I ain’t f***in’ with you
You little, you little dumb *ss b****, I ain’t f***in’ with you
I got a million trillion things I’d rather f***in’ do
Than to be f***in’ with you
Little stupid *ss, I don’t give a f***, I don’t give a f***
I don’t I don’t I don’t give a f***
B**** I don’t give a f*** about you or anything that you do

Big Sean has a criminal record that includes charges of  third-degree sexual assault and unlawful imprisonment.

How are we to think about these things? As someone once said, “If one should desire to know whether a kingdom is well governed, if its morals are good or bad, the quality of its music will furnish the answer.” (Credited to Confucius). The quality of a significant chunk of American music is rubbish.

The examples I provide above are not a departure from what is normal in our culture. They are the new normal. No longer relegated to nightclubs and MTV, shamelessly obscene songs are creeping into the likes of popular fitness studios that appeal to Gen Xers.

As for another example of the degradation of our music industry, take President Biden’s pre-election interview with Cardi B, the female rapper with the most number-one singles of all time on the Billboard Hot 100. This is an astonishing fact considering she released her debut studio album just three years ago in 2018. Prior to that, she had released two mixtapes—titled Gangsta B**** Music Vol. 1 and Vol. 2. (Asterisks added).

On August 7, 2020, Cardi B released arguably one of the most sexually graphic music videos of all time—at least among those in the mainstream. The song is titled WAP, which stands for something that is far too sexually explicit for me to comfortably share in this post, even with asterisks. The song opens with the lines “Whores in this house, There’s some whores in this house (x3).” It contains several four-letter words. Again, the music video is pornographic. The song was no. 1 on the Billboard Hot 100 for at least four straight weeks. It has close to one billion views on YouTube alone. It is the tenth most streamed song of all time on Spotify.

Ten days after the release of WAP, then-candidate Biden sat down for an interview with Cardi B. During the interview, he told Cardi B that his daughter is “a fan of [hers]” and that she would call him “Joey B” as a play off of Cardi B. Biden also congratulated Cardi B for her success. Meanwhile, Cardi B repeatedly addressed him simply as “Biden”—i.e., no use of Mr., Vice President, Senator, or Joe.

Let us now move on to the current Billboard Hot 100. Fortunately, Adele’s Easy on Me is at no. 1. But it’s virtually all downhill from there. At no. 2 is Stay by The Kid LAROI and Justin Bieber. It contains four F-bombs. Here’s a sampling of its lyrics:

I get drunk, wake up, I’m wasted still
I realize the time that I wasted here
I feel like you can’t feel the way I feel
Oh, I’ll be f***ed up if you can’t be right here

Oh-oh-oh-whoa (oh-oh-whoa, oh-oh-whoa)
Oh-oh-oh-whoa (oh-oh-whoa, oh-oh-whoa)
Oh-oh-oh-whoa (oh-oh-whoa, oh-oh-whoa)
Oh, I’ll be f***ed up if you can’t be right here

At no. 3 is Industry Baby by Lil Nas X & Jack Harlow. Lil Nas X identifies as queer. In connection with another 2021 song of his titled “Montero,” the controversial “Satan Shoes” were released, ostensibly by Nike, to promote its music video. (In reality, they were released by another company, which led to a lawsuit from Nike.) The music video for Industry Baby is pornographic, as Lil Nas X and a handful of apparently queer background dancers parade around naked in a large, open shower for much of the video. Later in the video, a woman appears in sexually explicit attire, or lack thereof.

Moving on to no. 4 we find Fancy Like by Walker Hayes. It appears to be a run-of-the-mill country song. It opens with:  

Ayy
My girl is bangin’
She’s so low maintenance
Don’t need no champagne poppin’ entertainment
Take her to Wendy’s
Can’t keep her off me
She wanna dip me like them fries in her Frosty

In Hayes’ defense, he is married and is presumably referring to his wife, so the song could (perhaps should) be perceived as endearing or cute.

Next, we have English pop-star Ed Sheeran’s Bad Habits. In the music video, Sheeran resembles a combination of the Joker from Batman and a zany or queer late-night television show host. The lyrics open on an implicitly sexual note:

Every time you come around, you know I can’t say no
Every time the sun goes down, I let you take control
I can feel the paradise before my world implodes
And tonight had something wonderful

Moving to no. 6, we have the most vulgar of the lot: hip-hop artist Drake’s Way 2 Sexy, feat. Future and Young Thug. Drake has been extremely popular for years. In fact, he has four more number-one albums (ten) than Michael Jackson (six). Many of his songs are very sexually explicit. This one is no exception. The music video opens in soft-core porn fashion. The lyrics include the F-word, the N-word, the B-word, and are clearly misogynistic. I do not feel it is appropriate to provide any further detail here.

With no. 7, we’re back to Ed Sheeran, this time to a song called Shivers. It’s basically about infatuation to the Nth degree (“I can’t get enough,” “[you] give me the shivers,” “Baby, you burn so hot”). It is about what very much sounds like a superficial, hyper-sexualized relationship.

To round out the top ten, we have Good 4 U by Disney actress Olivia Rodrigo. Despite the fact Olivia is just eighteen years old, the song contains an F-bomb. It is a resentful lament of a heartbroken girl in the aftermath of a breakup with a manipulative, hard-hearted ex-boyfriend. Arguably, it lacks redeeming value. Next, we have Need to Know by Doja Cat, which is all about unadulterated sex, contains several F-words and an N-word, and features the artist half-naked throughout the music video. Finally, we have no. 10—Dua Lipa’s Levitating. It, too, is about casual sex and features the artist in multiple provocative outfits.  

In summary, the current top ten of our Billboard Hot 100 features fourteen F-bombs, eleven B-words, nine N-words; near countless references to sex (most of them unconnected to love and all of them apparently unconnected to marriage); and three quasi-pornographic music videos that just twenty-five years ago would have been completely taboo (among several others that are palpably sexualized). Several of these artists have criminal records of some kind. The song that our culture would likely view as the most redeeming—Adele’s Easy on Me—is apparently about Adele’s divorce from her ex-husband. Sadly, the lyrics note that she was willing to “give up” on “putting you both first”—an apparent reference to her ex-husband and their nine-year-old son.

When juxtaposed to Finney’s complaints over Eric Clapton’s “anti-vaccination activism,” the comparison of “a mountain to a molehill” comes to mind. The NYT should be railing against much of the music of our day. Instead, it often celebrates the artists who perpetuate or accelerate the misogyny, sexism, and moral degradation rampant in parts of the industry. See a recent NYT article, for example, that states “things appear to be evolving in a more progressive direction” because “Nicki Minaj,” “Cardi B and Megan Thee Stallion have become the new elders.” All of these female rap artists, however, “perpetuate the patriarchal and misogynistic values which have always been at the heart of hip-hop” with their boundary-pushing racy lyrics and music videos. (Varsity, Cambridge University Student Newspaper). Many of their music videos contain extremely sexually explicit material.

I wonder what our founding fathers would think about our culture in this respect. The answer appears to be obvious when reflecting on quotes like this one from George Washington: “The foolish and wicked practice of profane cursing and swearing is a vice so mean and low that every person of sense and character detests and despises it.” God only knows what the likes of Washington would have thought about many of the music videos referenced above. He would probably roll over in his grave.

While virtually no one today would completely concur with Washington’s quote above, more would agree with this one: “Vulgarity is like a fine wine: it should only be uncorked on a special occasion, and then only shared with the right group of people.” (James Rozoff). Something similar could be said of sexual activity, i.e., with the right person in the right context (marriage).

Our culture would be a much better one if we treated profanity and sexuality accordingly. Perhaps songs like the one below would be more prevalent.  

“Dune,” “Daniel,” and Our Lure-Lame Relationship with Prophecy

This past Thursday, I watched the newly released, critically acclaimed movie Dune: Part One. Despite having high expectations, I thoroughly enjoyed it. Three days prior, I had my regularly scheduled men’s group, where five friends of mine and I have been studying the biblical Book of Daniel for the past six months. Initially, I was not terribly excited about our book choice.

How do these two seemingly unrelated experiences correlate with each other? Well, interestingly, both Dune and Daniel put a premium on prophecy, including on a prophesied savior. In our culture, Dune’s appeal to prophecy is alluring, while Daniel’s is often seen as pointless, lame, or worse. Why is that?

From here, I will provide a brief (non-spoiling!) overview of Dune and allude to its intriguing parallels with Christianity; tie in C.S. Lewis’s conversion experience and his view that Christianity is a “true myth”; and, finally, provide evidence to support the claim that the first part of the messianic prophecy contained in Daniel chapter 9 has been fulfilled. Here goes nothing—or something.

Based on a 1965 book, Dune opens in the distant future in the year 10191. Different people groups inhabit different planets. A dark, unrevealed emperor governs the galaxy. Arrakis—a desert planet with a dangerous climate—contains a prized spice named melange. The spice is critical to interstellar travel and, for some, the ability to foretell the future. Arrakis has long been inhabited by its native Fremen—a North African or Middle Eastern-looking people.

For the past eighty years, however, Arrakis has been ruled and exploited by a foreign people group named the Harkonnens. This has been at the emperor’s orders, so the Harkonnens could harvest the spice, primarily for the emperor’s benefit. Although Arrakis’s climate is brutally hot and prone to deadly sandstorms, the Harkonnens have been happy to be there because the spice is a source of astronomical wealth. They are a cruel, insatiable people. They oppress the native Fremen without mercy.

Within a few minutes of the movie’s opening, it is announced that the emperor has ordered the Harkonnens to vacate Arrakis and for another foreign people, the Atreides, to replace them. The Harkonnens are indignant; however, being inferior to the emperor, they have no choice but to depart, at least for now.

Unlike the Harkonnens, the Atreides are an honorable, free people who reside on a lush, oceanic planet named Caladan. They have no desire to govern Arrakis, but as loyal subjects of the emperor, they oblige the call. The Atreides are governed by a just leader, Duke Leto Atreides I, and his supportive yet independent-minded concubine, Lady Jessica. They have a teenage son named Paul Atreides who is heir to the throne.

Paul, who is the film’s main character, is honorable like his father; however, unlike his father, he has a mystique about him. He also possesses an empathy for the oppressed, an uncanny ability to see his own people’s blind spots, and insight into the future through visions and dreams. Paul inherited many of these characteristics from his mother, who was born to the Bene Gesserit—a quasi-religious entity whose goal is to pave the way for the “Kwisatz Haderach.” This term refers to a messiah figure who will bring all of humanity to a higher plane.

The Fremen, meanwhile, are a deeply spiritual people whose never-ending fight against oppression has shaped their identity. They also rely heavily on oral tradition and even more so on prophecy. Their chief prophecy is that a “Lisan al Gaib”—a term meaning savior—will come from another world and deliver them from their bondage.

I will refrain from providing any more details, but for those of you familiar with Christianity the parallels between the two are not trivial (and yet materially different at points, too). I am not sure where the plot goes in Part Two (sadly, it is not set to be released until October 2023). It may depart significantly from its similarities to Christianity. But this is beside the point: Dune is widely beloved—not despite its prophecy, but because of it. It is no. 131 on IMDb’s all-time Top Rated Movies list. The vast majority of Rotten Tomatoes’ Top Critics gave it glowing reviews. And it had one of the highest box office openings since the pandemic began.

Moreover, Dune is not an aberration among blockbuster hits with respect to its focus on prophecy. Other beloved movies that centered around the prophetic include The Matrix series, The Lord of the Rings trilogy, The Chronicles of Narnia series, Harry Potter series, The Terminator series, Arrival, and a host of others.

We love movies like these because we long for ultimate justice and redemption—which the prophecies promise. However, deep down, we know that we will never experience it in the natural. National or even global disaster seems possible, if not probable, on multiple levels. Yet our desire for absolute redemption remains. For these reasons, I am reminded of the famous C.S. Lewis quote from his book Mere Christianity:

If we find ourselves with a desire that nothing in this world can satisfy, the most probable explanation is that we were made for another world. If none of my earthly pleasures satisfy it, that does not prove that the universe is a fraud. Probably earthly pleasures were never meant to satisfy it, but only to arouse it, to suggest the real thing.

Before transitioning to Daniel, we will stay on Lewis for a bit. For those of you unfamiliar with him, he was an Oxford-educated author and professor who was an atheist early in life. He would later become the Chair of Medieval and Renaissance Literature at Cambridge University. He was a contemporary and friend of J. R. R. Tolkien, The Lord of the Rings author.

As his conversion story goes, in the late-summer of 1931, Lewis began an extended dialogue with Tolkien and another academic named Hugo Dyson. At the time, Lewis was thirty-two years old and had just started to believe in God generally, leaving his atheistic beliefs behind. Nonetheless, he had not yet embraced Christianity. Tolkien and Dyson were both Christians and a bit older than Lewis. Their dialogue with Lewis took place during strolls around Oxford’s campus, over dinner, and in Lewis’s study, among other places. The subject matter generally was Christianity, metaphor, and myth—indicative of where Lewis’s doubts lay.

Lewis quickly came to find Tolkien and Dyson compelling. About a month after their conversation began, Lewis wrote to a longtime friend:

Now what Dyson and Tolkien showed me was this: that if I met the idea of sacrifice in a Pagan story I didn’t mind it at all: again, that if I met the idea of a god sacrificing himself to himself . . . I liked it very much and was mysteriously moved by it: again, that the idea of the dying and reviving god (Balder, Adonis, Bacchus) similarly moved me provided I met it anywhere except in the Gospels. The reason was that in Pagan stories I was prepared to feel the myth as profound and suggestive of meanings beyond my grasp even tho’ I could not say in cold prose ‘what it meant’.

Now the story of Christ is simply a true myth: a myth working on us in the same way as the others, but with this tremendous difference that it really happened.

Within a couple of weeks of the start of his ongoing dialogue with Tolkien and Dyson, Lewis embraced Jesus as his divine savior. Amusingly, this occurred while he was “riding in his older brother’s motorcycle sidecar on the way to the newly opened Whipsnade Park Zoo in Bedfordshire.” (The Gospel Coalition). Lewis recounted the event:

I know very well when, but not how, the final step was taken.

I was driven to Whipsnade one sunny morning. When we set out I did not believe that Jesus Christ is the Son of God, and when we reached the zoo I did.

Yet I had not exactly spent the journey in thought.

Now, I do not believe the “true myth” of which Lewis was speaking was merely Jesus’s birth, death, resurrection, and offer of atonement for our sins. I believe it was also the prophecy underlying it. With that, we now turn to Daniel, whose book, like other biblical books, prophesied that an “Anointed One” would come. (Dan. 9:25). Daniel further prophesied that this would occur at a specified point in time and that the “Anointed One” would thereafter “be put to death and [] have nothing.” (Dan. 9:26). Many scholars believe this is a clear reference to Jesus, as do I. To lend credibility to this claim, we need to understand the context of Daniel and Israel at large.

By way of background, the biblical figure Daniel was a Jew believed to have been born in the neighborhood of 620 B.C. His parents were contemporaries of Jeremiah—one of the major prophets of the Jewish and Christian Bibles. Before continuing with Daniel’s story, it is important to take a jaunt into Jeremiah’s.

Jeremiah was “born probably after 650 BCE [in] Anathoth, Judah,” a village located a few miles from Jerusalem. (Britannica). He would live approximately eighty years, dying in around “570 BCE [in] Egypt.” Id. Jeremiah’s historicity is virtually unquestioned. (See Britannica, for example).

“According to the biblical Book of Jeremiah, he began his prophetic career in 627/626—the 13th year of King Josiah’s reign [of Israel].” Id. At the time, Israel is believed to have been a subject-state of Assyria, then a world power. Shortly after beginning his ministry, Jeremiah told his fellow Jews, “This is what the LORD says: ‘When seventy years are completed for Babylon, I will come to you and fulfill my good promise to bring you back to this place.’” (Jer. 29:10). He also said this:

This whole country will become a desolate wasteland, and these nations will serve the king of Babylon seventy years.

“But when the seventy years are fulfilled, I will punish the king of Babylon and his nation, the land of the Babylonians, for their guilt,” declares the Lord, “and will make it desolate forever.”

(Jer. 25:11-12). Jeremiah’s writings are believed to have been completed by somewhere between 605 and 580 B.C., and obviously before his death in 570 B.C.

Alright, circling back to Daniel’s story. “[On] March 16, 597,” Babylon, under Nebuchadnezzar II, “captured Jerusalem [and] deport[ed] King Jehoiachin [of Israel] to Babylon.” (Britannica). “The siege of Jerusalem ended in its capture in 587/586 and in the deportation of prominent citizens, with a further deportation in 582.” Id. Daniel was among the deportees over the course of Nebuchadnezzar’s multiple sieges of Israel, which may have begun as early as 605 B.C. when Nebuchadnezzar came to power.

This extra-biblical information comports with what we read in Daniel. After coming to power, Nebuchadnezzar “ordered Ashpenaz, chief of his court officials, to bring into the king’s service some of the Israelites from the royal family and the nobility—young men without any physical defect, handsome, showing aptitude for every kind of learning.” (Dan. 1:3-4). “Among those who were chosen were some from Judah: Daniel, Hananiah, Mishael and Azariah.” (Dan. 1:6).

Relatively quickly, after interpreting a dream for Nebuchadnezzar, “the king placed Daniel in a high position.” (Dan. 2:48). In fact, Daniel would later be “proclaimed the third highest ruler in the kingdom.” (Dan. 5:29). “Daniel remained there [in Babylon] until the first year of King Cyrus.” (Dan. 1:21). This event occurred in 539 B.C. when “the legendary Persian king Cyrus the Great conquered Babylon.” (The History Channel). Thus, Daniel lived most of his life as an exile in Babylon.

Daniel chapter 9, which is believed to reflect events occurring in 540-539 B.C., opens with Daniel realizing “according to the word of the Lord given to Jeremiah the prophet, that the desolation of Jerusalem would last seventy years.” (Dan. 9:2).

In great likelihood, Daniel would have known that Assyria—to which Israel had previously been subject—fell in 609 B.C. (Britannica). In fact, it was then when “the Assyrian empire collapsed under the assault of Babylonians from southern Mesopotamia and Medes, newcomers who were to establish a kingdom in Iran.” (The Metropolitan Museum of Art). Thus, Babylon likely would have assumed at least some control over Israel when it took Assyria in 609 B.C. According to the Jewish Virtual Library, “[w]hen the Babylonians defeated the Egyptians in 605 BC, then Judah [officially] became a tribute state to Babylon.” Daniel would have known this, too.

Again fast-forward to 540-539 B.C. when the events in Daniel chapter 9 are believed to have occurred. Upon realizing that the seventy-year period of captivity prophesied by Jeremiah was just about up, Daniel “turned to the Lord God and pleaded with him in prayer and petition, in fasting, and in sackcloth and ashes.” Daniel’s ensuing prayer lasts sixteen verses. He is asking for freedom for his people.

Remarkably, the next year, “[i]n 538 BCE[,] King Cyrus made a public declaration granting the Jews the right to return to Judah and rebuild the Temple in Jerusalem.” (Israel Ministry of Foreign Affairs). It likely would have taken the Jews several months to receive news of King Cyrus’s decree and an even longer period of time to make the lengthy journey from modern day Iran to Jerusalem, a distance of close to 1,800 kilometers. Thus, Jeremiah’s seventy-year prophecy appears to have been fulfilled—perhaps on multiple levels.

All this sets the stage for the last few prophetic verses of Daniel chapter 9, when Daniel receives a word from the angel Gabriel “[w]hile [he] was speaking and praying, confessing [his] sin and the sin of [his] people Israel and making [his] request to the Lord [his] God for his holy hill.” (Dan. 9:20). Rather than address Daniel’s concern over the seventy-year captivity and the looming end thereof, Gabriel addresses another thing altogether. I will only quote verse 25 and part of 26 (Gabriel addressing Daniel):

Know and understand this: From the time the word goes out to restore and rebuild Jerusalem until the Anointed One, the ruler, comes, there will be seven ‘sevens,’ and sixty-two ‘sevens.’ It will be rebuilt with streets and a trench, but in times of trouble. After the sixty-two ‘sevens,’ the Anointed One will be put to death and will have nothing.

The word “sevens” is misleading to our English ears. The original text used the word “שְׁבוּעַ” or “shabua,” which is defined as “a period of seven (days, years), heptad, week.” “Shabua” is also used, for example, in Genesis 29:27 where Laban tells his nephew Jacob that he will need to work for him for “another seven years” in order to marry his daughter Rachel, after Laban had swindled him into first marrying his eldest daughter, Leah.

Thus, “seven ‘sevens,’ and sixty-two ‘sevens’” (Dan. 9:25) is thought to refer to sixty-nine seven-year periods. This calculates to 483 years. In other words, “From the time the word goes out to restore and rebuild Jerusalem until the Anointed One, the ruler comes, there will be [483 years].”

According to many scholars, the term “Anointed One” refers to a messiah figure. But what does “the word [that] goes out to restore and rebuild Jerusalem” refer to? What event might this be?

The Jewish Virtual Library, described by one college as “the most comprehensive online Jewish encyclopedia in the world” and by a publication of the American Library Association as “a living encyclopedia [more] than it is anything else,” says this:

In the 20th year of the Persian king Artaxerxes I (445), a delegation of Jews arrived from Jerusalem at Susa, the king’s winter residence, and informed Nehemiah of the deteriorating conditions back in Judah. The walls of Jerusalem were in a precarious state and repairs could not be undertaken (since they were specifically forbidden by an earlier decree of the same Artaxerxes (Ezra 4:21)). The news about Jerusalem upset Nehemiah, and he sought and was granted permission from the king to go to Jerusalem as governor and rebuild the city.

See also Nehemiah 2:1-8, where it states that it “pleased the king [Artaxerxes I] to send [Nehemiah] . . . to the city in Judah where [Nehemiah’s] ancestors are buried so that [he] can rebuild it,” and, further, that the king then issued letters to governors and other officials memorializing his decree.

Now, 483 years from 445 B.C. equates to approximately 39 A.D., when Daniel prophesied the “Anointed One” would come. Jesus, however, is widely considered to have been born in 6-4 B.C. and to have died in the 30-33 A.D. timeframe. (See Britannica, for example, estimating it to be at 30 A.D.).

However, when factoring in that “[a]ncient calendars around the world initially used a 360 day calendar,” you end up with approximately 476 years (rather than 483) when calculating the sixty-nine seven-year periods. With this in mind, 476 years from 445 B.C. is approximately 32 A.D., which is the year in which many scholars think Jesus in fact died.
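
For those who want to check the math, here is a rough sketch of the arithmetic, assuming a 360-day prophetic year, an average solar year of 365.25 days, and the convention that there is no year zero between 1 B.C. and 1 A.D.:

\[
\begin{aligned}
69 \times 7 &= 483 \ \text{prophetic years} \\
483 \times 360 &= 173{,}880 \ \text{days} \\
173{,}880 \div 365.25 &\approx 476 \ \text{solar years} \\
-445 + 476 + 1 &= 32 \quad \text{(i.e., roughly 32 A.D.)}
\end{aligned}
\]

Shifting the assumed decree date or the assumed length of the year by a little moves the result by only a year or two, which is why estimates in this range vary slightly from scholar to scholar.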

But why assume that the prophecy should calculate to his death rather than to his birth? Is not his birth when he “came,” to use the word Daniel uses?

Remember, the prophecy says sixty-nine sevens “until the Anointed One, the ruler, comes” and “after [the sixty-nine sevens the Anointed One] will be put to death and will have nothing.” (Dan. 9:26). To use an example, we would say that a president or prime minister “comes” or arrives when he is actually recognized as a political leader. Thus, the question as to the date of the Anointed One’s coming should be, “When was Jesus publicly recognized as a savior, king, or anything of the sort?”

As commemorated by Palm Sunday, just a few days before Jesus died, he was recorded to have had his triumphal entry into Jerusalem. Below is the version as set forth in John 12:12-16 (the event is also recorded in Matthew, Mark, and Luke):

The next day the great crowd that had come for the festival heard that Jesus was on his way to Jerusalem. They took palm branches and went out to meet him, shouting,

“Hosanna!”

“Blessed is he who comes in the name of the Lord!”

“Blessed is the king of Israel!”

Jesus found a young donkey and sat on it, as it is written:

“Do not be afraid, Daughter Zion;
    see, your king is coming,
    seated on a donkey’s colt.”

The other gospels provide additional details, including this in Luke 19:39: Upon witnessing his triumphal entry, “Some of the Pharisees in the crowd said to Jesus, ‘Teacher, rebuke your disciples!’” Why did the Pharisees do this? As stated by some scholars, they realized the crowd was singing Psalm 118:26 to Jesus, which was considered to be a messianic psalm.

According to Sir Robert Anderson, as set forth in his book The Coming Prince, Jesus’s triumphal entry into Jerusalem was exactly sixty-nine seven-year periods to the day from when Artaxerxes I gave the decree recorded in Nehemiah chapter 2.

Obviously, scholars quibble about these things and arrive at vastly different conclusions. But “virtually all scholars . . . accept that a human Jesus existed” and an apparent widespread consensus among them is that he died around 30 A.D. or shortly thereafter. And the timing of the events outlined above, even when extracted entirely from extra-biblical sources, is too eerie to dismiss the prophecy of Daniel outright.

Of course, anyone can find an “expert” to tell them otherwise. As a lawyer, I know this better than most. By saying this, I do not mean to suggest that anyone with a true intellectual curiosity (rather than a pretextual one) should not engage with scholars and academics. By all means do so. But do not place all your hope in them. On that note, here is an excerpt from G.K. Chesterton’s “The Twelve Men: An incomparable explanation of juries”:

Our civilisation has decided, and very justly decided, that determining the guilt or innocence of men is a thing too important to be trusted to trained men [i.e., experts]. . . .When it wants a library catalogued, or the solar system discovered, or any trifle of that kind it uses up its specialists. But when it wishes anything done which is really serious, it collects twelve of the ordinary men standing round. The same thing was done, if I remember right, by the Founder of Christianity.

Below is a charming, short clip from The Chronicles of Narnia: The Lion, the Witch and the Wardrobe in which the prophecy of Aslan is made known to Peter, Susan, Edmund, and Lucy by a couple of enchanting beavers.

The Long March Through (or Into) the False Self

Prologue (who includes a prologue in a blog post?!)

The title of this post is a play off of the phrase the “Long March Through the Institutions,” which is often credited to Antonio Gramsci, an Italian Marxist from the early 20th century. Gramsci was referencing the conditions necessary for a societal revolution in the West—namely, the co-opting of various governmental and cultural institutions like school systems, corporations, the media and entertainment industry (e.g., Hollywood), and the academy. Gramsci believed that traditional Marxism had failed to lead to revolution in the West because capitalism was too deeply embedded in western culture, even among the proletariat (i.e., the working class). Thus, a long march through western culture’s institutions must ensue to uproot capitalism and effect revolution. Out of this idea, the term “cultural Marxism” was born.

While I believe a calamitous march of this sort is underway in western society today, the purpose of this blog post is not to further explain or address that. Rather, it is to draw parallels between this concept and that of the journey necessary to combat our own deeply embedded individual corruption and propensity to maintain and present a false self.

The false self is a term apparently born out of psychoanalysis in the 1960s. In short, it is a defensive facade. Virtually all of us have it to one degree or another. Richard Rohr, a progressive Franciscan priest and popular writer on spirituality, describes it like this:

[The false self] is a set of agreements between you and your parents, your family, your school chums, your partner or spouse, your culture, and your religion. It is your “container.” It is largely defined in distinction from others, precisely as your separate and unique self. It is probably necessary to get started, but it becomes problematic when you stop there and spend the rest of your life promoting and protecting it.

To be clear, I do not endorse Rohr or his works. Nonetheless, I believe his definition is apt. For many of us, the false self is deeply entrenched and will not easily uproot. Instead, we must make the long march through the institutions—our very own, i.e., the aggregate of our personal psychological and spiritual characteristics. However, we can’t do this, at least not well, without also undertaking a search for God. For, as stated in the first chapter of John Calvin’s Institutes:

Without knowledge of self, there is no knowledge of God. Our wisdom, insofar as it ought to be deemed true and solid wisdom, consists almost entirely of two parts: the knowledge of God and of ourselves. But as these are connected by many ties, it is not easy to determine which of the two precedes and gives birth to the other.

Blog Post

In 2015, a British-Swiss writer and journalist by the name of Johann Hari gave a now famous TED Talk titled, “Everything you think you know about addiction is wrong.” Easily, the most poignant line from the entire talk was the very last one: “[T]he opposite of addiction is not sobriety. The opposite of addiction is connection.”[1] Ooh, that’s deep.

Throughout the talk, Hari comes across as thoughtful and sensitive, but emotionally grounded (as could be said of most TED Talk speakers, whose talks all seem to share a similar aura, like that of Serial podcasts and their progeny, no?). However, as Hari closes his talk with the line quoted above, I thought I sensed painful emotion begin to surface and his eyes begin to well. It’s subtle, but real. (You can watch or evaluate for yourself here at 14:15). Perhaps I was projecting my own experience onto him.

I was moved to emotion in part because I was reminded of an even more poignant quote—this one by NYC pastor Tim Keller from his book “The Meaning of Marriage”:

To be loved but not known is comforting but superficial. To be known and not loved is our greatest fear. But to be fully known and truly loved is, well, a lot like being loved by God. It is what we need more than anything. It liberates us from pretense, humbles us out of our self-righteousness, and fortifies us for any difficulty life can throw at us.

If you were not previously familiar with this quote, it is worth rereading. In fact, let us do that together as I attempt to expound on it, particularly as it relates to our culture at large.

To be loved but not known . . . I believe this phrase encapsulates our culture. According to a 2021 poll by the Survey Center on American Life, for example, 49% of Americans report having just zero to three close friends. In 1990, that figure was 27%. Close to one in five Americans report having one close friend or none at all. In summary, the poll revealed that “Americans are experiencing a crisis of friendlessness.”

While true friendship is on the decline, online “friends” and “followers,” and principles like affirmation and acceptance, are on the rise. Many Americans seem to now believe that validating someone’s inner experience as true, or at least as perfectly acceptable, is love. The obvious example is the LGBTQ community, where affirmation and acceptance are the chief hallmarks and seemingly know few limits.

By making this observation, I don’t mean to criticize the LGBTQ community’s penchant for acceptance without exception. Two Christian authors I very much like, for example—both of whom are gay or same-sex attracted yet have changed their lifestyles because of their Christian convictions—have credited it for its high level of hospitality. See Rosaria Butterfield’s “The Gospel Comes with a House Key” or David Bennett’s “A War of Loves: The Unexpected Story of a Gay Activist Discovering Jesus.”

Rather, I simply mean to highlight the increased emphasis and value our culture places on affirmation and acceptance. Recent examples include Facebook’s fifty-six gender options, terms like “pregnant people,” California’s attempted criminalization of non-preferred pronouns, and bans on “conversion therapy,” etc. Just ten or fifteen years ago, these things would have been anathema, even to most non-religious folks; however, they are now very much in the mainstream and even heralded by various religious groups.

I put the term “conversion therapy” in quotes above because today it is often defined to include “any attempt to change a person’s gender identity,” including that of a very young child. (Historically, the term was limited to efforts to change a person’s sexual orientation by using certain apparent pseudoscientific interventions.) Banning such “conversion therapy” is loving to the transgender person in the sense that it is uber-affirming of their view of self and purports to eliminate any and all threats to it. The question, however, is whether such bans actually love the transgender person, or rather affirm a falsehood they believe about themselves, thereby heavily fortifying their barrier to actually being known.

. . . is comforting but superficial.  Virtually all of us know what it feels like to be loved but not known. It’s when your stressed-out mother fed you ice cream or chips instead of attuning to your palpable emotional needs. It’s when you had a lot of popularity or name-recognition in high school simply because you were good-looking, athletic, smart, or funny. It’s when you gained hundreds of likes or even new followers on Twitter just because you announced you’re no longer straight or cisgender. These things are comforting, but superficial.

To be known and not loved is our greatest fear. I almost left the remainder of this paragraph blank for dramatic effect. When someone knows or sees us for who we truly are, including the bad and the ugly, it is not uncommon for us to experience fear, hide, blame, or worse. A classic example is the story of Adam and Eve, where upon eating the forbidden fruit, they were “afraid,” “hid themselves from the presence of the Lord God,” and then blamed their actions on others (Adam on Eve, Eve on the serpent). (Gn. 3:6-13). To be known by God and not loved was their greatest fear. Nonetheless, that chapter of their story concludes with God going in pursuit of them, alluding to Jesus’s future coming to make things right, and creating “garments of skins [for them] and cloth[ing] them.” (Gn. 3:8-9, 15, 21).

But to be fully known and truly loved is, well, a lot like being loved by God. This is a hard pill for many to swallow, me included at times. Is God really loving? This life is hard and incredibly brutal at times. As famous atheist Bertrand Russell ominously described it, “The Life of Man is a long march through the night, surrounded by invisible foes, tortured by weariness and pain, towards a goal that few can hope to reach, and where none may tarry long.” Russell was an atheist, so the end of his quote makes sense.

As Nietzsche keenly observed, “The gods justified human life by living it themselves — the only satisfactory response to the problem of suffering ever invented.” Ironically, Nietzsche hated Christianity, but his quote suggests Christianity is the only religion with a satisfactory answer. I assume the irony was lost on him, sadly.

What I mean by that is this. At the heart of the Christian message is the idea that “God showed his great love for us by sending Christ to die for us while we were still sinners.” (Rm. 5:8). And this: “He who did not spare his own Son, but gave him up for us all—how will he not also, along with him, graciously give us all things?” (Rm. 8:32). In other words, God owes us nothing, but offers us everything.

But if God is so loving, then why all the pain and hardship in this life? As many philosophers and social scientists have realized, love must be freely given and freely received. Peter Kreeft, a professor of philosophy at Boston College, puts it like this in his book “The God Who Loves You”:

Thus even our fall, our sin, is proof of God’s love. Only in freedom can we sin. And only love gives us the freedom to sin. Without that freedom to sin there is also no freedom to love…. We wish God had given us less freedom and had guaranteed that we would stay in Eden forever. We wish that He had put up a sign saying ‘no snakes in the grass,’ that he had given no law that we could ever have chosen to disobey. But that would not be father-love or mother-love, only smother-love. That would not be parenting but patronizing and pandering.

It is what we need more than anything. This claim is substantiated by a 2018 Pew Research Center study, consisting of multiple extensive surveys, titled “Where Americans Find Meaning in Life.” The results of the study can be summarized as follows:

Across both surveys, the most popular answer is clear and consistent: Americans are most likely to mention family when asked what makes life meaningful in the open-ended question, and they are most likely to report that they find “a great deal” of meaning in spending time with family in the closed-ended question.

Now, why would Americans list family as their number one source of meaning when our culture seems to celebrate individuality, career, wealth, prestige, and sexual autonomy and fulfillment more than family? I think it is because, deep down, we realize we want to be truly known, and yet still fully loved, more than anything else. And family (not without exception) is generally our best opportunity to experience that. Our parents, spouse, and children often know us better than anyone, and yet—despite our flaws, of which there are many—often still love us.

It liberates us from pretense, humbles us out of our self-righteousness, and fortifies us for any difficulty life can throw at us. This can be the beauty of being truly known and yet still fully loved: it liberates us from the shackles of our false self, relieves us of our inflated sense of our own goodness, and equips us to experience the freedom of self-forgetfulness.

But herein lies the rub: you need to know yourself and then make yourself known. Only you can do that. And to do that requires what may be agonizingly difficult honesty—with yourself, with others, and with God. This is part of the reason, I believe, Jesus said, “For whoever wants to save their life will lose it, but whoever loses their life for me will save it.” (Lk. 9:24). And this: “Even if [a brother or sister] sins against you seven times in a day and seven times come back to you saying ‘I repent,’ you must forgive them.” (Lk. 17:4).

Why did Jesus seemingly make forgiveness conditional upon the offender “com[ing] back to you saying ‘I repent’”? Because without repentance, the false self is maintained. The true self remains in hiding. And anything in hiding cannot truly be dealt with.

Below is a short clip of Tim Keller discussing the modern self and the impossibility of finding your identity within yourself. It is only two minutes long, and it’s funny, too. Check it out.


[1] And Hari wasn’t merely talking about alcohol addiction, but rather “all sorts of addictions, whether it’s to [our] smartphone or to shopping or to eating.”

Tech, Trans, & the Throes of Teenage Girls

Advances with respect to both technology and LGBTQ rights have been massive in the 21st century. Arguably, more ground has been covered in twenty-one years than in the prior twenty centuries combined. Let’s do a quick recap.

As to LGBTQ rights, at the turn of the 21st century not a single country in the entire world had legalized same-sex marriage. Nor had a single U.S. state. A mere twelve states banned sexual orientation discrimination in private employment. Not a single state banned gender identity discrimination in private employment. Criminalization of gay and lesbian sex was still constitutional. Approximately 54% of Americans thought “sexual relations between two adults of the same sex” (let alone marriage) were “always wrong.” Many others thought they were “almost always” or “sometimes” wrong. And a host of LGBTQ-related words (e.g., cisgender, genderfluid, genderqueer) were years and years away from being added to the dictionaries.

As to technology, the apocalyptic Y2K bug was threatening to shut down our dial-up internet and other now-outdated technology. We lacked camera phones, USB flash drives, Bluetooth, social media, YouTube, bitcoin, voice assistants, 3D printing, iPhones, and a myriad of other things. The first mass-produced hybrid car sold in America (the Honda Insight) had just been released. And your average American sent a whopping thirty-five text messages per month, most likely on a Nokia 3210. Yes, per month.

Notably, the advancement of technology, unlike that of LGBTQ rights, has not been very politically polarizing—big tech regulation efforts notwithstanding. Both, however, have had a disproportionate impact on teenage girls.

With respect to technology, this assertion is pretty noncontroversial. The political left, right, and center have actually come together on this issue, or at least reached similar conclusions. For example, the NYT, WSJ, WaPo, and Breitbart all recently published articles condemning the effects of Instagram on teenage girls. Needless to say, rarely do these four news outlets collectively agree on anything.

Texting, likewise, is particularly problematic for teenage girls. Studies show that teenage girls generally text more than boys; are more prone to compulsive texting; and, apparently unlike boys, experience a “negative relation between compulsive texting and academic functioning.”

The problems appear to go beyond Instagram and texting to the smartphone at large. In a 2020 article by Etactics titled “40+ Frightening Social Media and Mental Health Statistics,” author Maria Clark writes:

Since the release of smartphones, mental health concerns have increased in children and young adults. The rate of adolescents reporting symptoms of major depression in a given year increased by 52% from 2005 to 2017. From 2009 to 2017, it grew by 63% in adults ages 18 to 25.
. . . .
Between 2012 and 2015, depression in boys increased by 21% and in girls by 50%.
. . . .
Child suicide rates increased by up to 150%, and self-harm by girls ages 10 to 14 nearly tripled. These patterns point to social media.

The tech giant execs know this better than anyone. As NYT contributor Lindsay Crouse states in the closing paragraph of her recent article, “For Teenage Girls, Instagram Is a Cesspool”: “more telling than what Silicon Valley parents say [about tech] is what they do. Many of them have long known that technology can be harmful: That’s why they’ve often banned their own children from using it.”

In light of the above, it is not hard to understand why Americans from both sides of the political aisle share common ground on this subject.

With respect to the advancement of LGBTQ rights, recognizing its disproportionate impact on teenage girls should be noncontroversial, too—at least if construed matter-of-factly. As recently reported by WebMD in its article titled “Big Rise in U.S. Teens Identifying as Gay, Bisexual,” between 2015 and 2019 the percentage of fifteen- to seventeen-year-old boys identifying as non-heterosexual jumped by 26%, from 4.5% to 5.7%, while the percentage of girls identifying as non-heterosexual jumped by 46%, from 12.2% to 17.8%. According to another study, 23% of black women ages eighteen to thirty-four now identify as bisexual. By contrast, as of 2010, just one in sixty-five women (i.e., about 1.5%) identified as bisexual.

The statistics are even more glaring as it relates specifically to transgenderism. According to the most recent Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which was published in 2013, fewer than one in 10,000 people experienced gender dysphoria at that time. More specifically, it occurred in .005-.014% of males and in .002-.003% of females (i.e., approximately one in every 30,000-50,000 females). In other words, as of 2013, males accounted for anywhere between 62-82% of all cases of gender dysphoria. Most cases involved young boys; gender dysphoria in adolescent females was extremely rare.

Since then, “adolescent gender dysphoria has surged across the west. In the United States, the prevalence has increased by over 1,000 percent.” Abigail Shrier, Irreversible Damage: The Transgender Craze Seducing Our Daughters (2021). In fact, per a 2017 CDC study, 2% of high school students now identify as transgender. A 2018 study conducted by then-Brown University professor Lisa Littman, MD, MPH, FACOG, found that 80% of the adolescents with gender dysphoria she studied were females, with a mean age of 16.4. For the vast majority of these girls, there was not a single indicator of gender dysphoria in their childhood. Rather, the condition “seemed to occur in the context of belonging to a peer group.” The results of her study led Dr. Littman to coin the term “rapid-onset gender dysphoria” and to describe it as mostly a “social contagion.” This phenomenon is not limited to the U.S. “In Britain, the increase [in adolescent gender dysphoria] is 4,000 percent, and three-quarters of those referred for gender treatment are girls.” (Shrier).

We are not simply referring to social transitioning either. Medical and surgical transitioning has increased by leaps and bounds, too. From 2016 to 2017 alone, there was a 400% rise in transgender surgeries in the U.S., with women accounting for seven in ten. (Shrier, citing the 2017 Plastic Surgery Statistics Report). From 2008 to 2018, the UK reported a 4,400% increase in teenage girls referred for gender treatment. At the time Shrier was writing her book in 2019 or 2020, GoFundMe was hosting “over thirty thousand fundraisers to enable young women to remove their healthy breasts.” (Shrier). Apparently, as of this moment, there are 40,330 (including both sexes).

Considering the above, we must ask whether the greatly increased prevalence of transgenderism among teenage girls is impacting them negatively, or merely neutrally or even for the better. Obviously, this is a very polarizing question in our culture. But no one can reasonably deny that it is negatively impacting teenage girls in this sense: gender dysphoria by its very nature is distress, often of an extreme intensity. (See, for example, the Child Mind Institute’s Quick Guide to Gender Dysphoria, which states that “[t]he key sign of gender dysphoria is that the child feels extreme emotional distress because of their gender identity.”) Thus, if gender dysphoria is increasing exponentially among teenage girls, they are being negatively impacted in the sense that they are experiencing more distress. The counterarguments would be that it is the culture’s fault for not being more affirming, and/or that the distress is a necessary stepping stone to the true happiness that comes once the transition is settled into or completed.

The primary counterargument, however, may lie in suicide rates. That is, many mental health professionals and educators tell parents that if they do not accept and affirm their child’s own sense of gender, their child will be more likely to commit suicide: “Would you rather have a dead daughter or a live son?” they rhetorically ask.

Lamentably, it is indisputable that transgender people (and particularly transgender youth) have very high rates of suicidal ideation. According to the 2015 U.S. Transgender Survey (USTS), “the largest survey of transgender people in the U.S. to date,” 82% of transgender people have seriously considered suicide in their lifetimes, 48% have done so in the last year, and 40% have attempted suicide at some point in their lives. By contrast, only 4.6% of the general population has attempted suicide at some point in their lives. The disparity is incredibly sad. The pain is infinitely real.

The statistics are even more disturbing for teenage girls who identify as transgender. As reported by the Human Rights Campaign, a 2018 study found that “more than half of transgender male teens [i.e., biological girls] . . . attempt[ed] suicide in their lifetime,” with approximately 70% experiencing suicidal ideation in the last year.

As suggested above, proponents of transgender ideology would blame these statistics on societal stigma, discrimination, and deprivation of human rights. But these things have decreased significantly in recent years. For example, our executive branch, mainstream media, and U.S. Supreme Court (or at least six of its justices), among other institutions, arguably stand in solidarity with the transgender community.

Moreover, according to the Transgender Law Center’s Equality Map, “45% of the LGBTQ population lives in states with high policy tallies,” i.e., “laws and policies within the state that help drive equality for LGBTQ people.” Only “11% of the LGBTQ population lives in states with negative overall policy tallies.” Yet, despite these things, the transgender community’s mental health statistics do not appear to be getting better and may in fact be getting worse. (See the Human Rights Campaign’s above-linked article stating that the mental health statistics are “harrowing” and “alarming”).

From my perspective, these statistics raise the question of whether there is a major cultural gaslighting underway. What I mean by that is this. On the one hand, we are increasingly promoting transgenderism in K-12 public education, among other places—including the purported distinctions between gender identity, gender expression, sex assigned at birth, physical attraction, and emotional attraction as represented by the Gender Unicorn or the Genderbread Person. As a result, 15.1% of Gen Z and 9.1% of Millennials now identify as LGBTQ, as compared to 3.8% of Gen X and 2% of Baby Boomers.

Study after study, however, shows that those who identify as transgender experience significantly worse mental health than the general population (the same goes for the rest of the LGBTQ community, albeit to a lesser degree). But, as demonstrated above, we are promoting, celebrating, and driving people to entertain transgenderism. Thereafter, if the gender-confused or gender-dysphoric person expresses reservation or ambivalence, we tell them that their mental health will worsen unless they fully embrace a transgender identity and “accept themselves.”

However, “[w]hen you tell a group of highly suggestible adolescent[s] that if they don’t [do] a certain thing, they’re going to feel suicidal, that’s suggestion, and then you’re actually spreading suicide contagion.” (Lisa Marchiano, Psychotherapist, Certified Jungian Analyst). Fearing for their own lives (at least in some instances), the gender-confused or gender-dysphoric person will then enter the trans-community only to subsequently become one of the 40-50%+ that experiences suicidal ideation. I am not suggesting that this is the order of events for all transgender people who experience suicidal ideation, but it is the order for some, perhaps many. Hence, what appears to be gaslighting.

We now live in a country where “[j]ust 45% of Gen Zers report their mental health is very good or excellent” (significantly lower than any other generation). Moreover, 70% of teens say anxiety and depression are a “major problem” among their peers according to a 2019 Pew Research Center study. Obviously, there are other factors at play, but I believe it is unreasonable to deny that tech and trans-activism are two contributing factors—perhaps the two foremost.

I can’t help but think of C.S. Lewis’s comments on progress in his book “The Case for Christianity.” He writes:

We all want progress. But progress means getting nearer to the place where you want to be. And if you have taken a wrong turn, then to go forward does not get you any nearer. If you are on the wrong road, progress means doing an about-turn and walking back to the right road; and in that case the man who turns back soonest is the most progressive man.

For the thousands of trans teens who have detransitioned or want to, I imagine this quote resonates greatly.

In closing, as compassionately stated by Christian author Sam Allberry, who has been attracted to men since he was an adolescent, Christianity can “uniquely account for how it is that someone could end up feeling so un-at-home in their own body.” (He explains why in the short video below.) For this reason, Allberry notes, Christians should be “the most compassionate and understanding people there are when it comes to this issue.” He then identifies how Christianity offers a unique hope. For more, watch his two-minute, forty-five-second video below.

The Unparalleled DEI of Christianity

Like many conservatives, I do not think well of the phrase “diversity, equity and inclusion” (commonly referred to as DEI). To be clear, I do not think ill of the terms in and of themselves—unshackled from their 21st century socio-political definitions. To the contrary, I embrace them. As to diversity, for example, I currently attend a very racially diverse church that is a true melting pot and no more than 15-20% white. I attend this church because of the diversity, not in spite of it. I see it as objectively beautiful. As to equity, a top strength of mine per the popular StrengthsFinder test is “consistency,” which the results say means that “you are keenly aware of the need to treat people the same, no matter what their station in life, so you do not want to see the scales tipped too far in any one person’s favor. In your view, . . . [i]t leads to a world where some people gain an unfair advantage because of their connections or their background or their greasing of the wheels. This is truly offensive to you.” This definition very much resonates with me and my values. As to inclusion, at parties or in group settings, my eyes and heart often wander to the outsider—I want to make sure they feel noticed and included. Obviously, I do not hold to these values perfectly, but they are very important to me.

So why do I dislike the phrase DEI when it sounds as if I ought to be on the frontlines of championing it? Well, for one, the DEI movement does not like diversity or inclusion when it comes to conservatives or Christians. As admitted by popular NYT columnist Nicholas Kristof in his article titled “A Confession of Liberal Intolerance,” “We progressives believe in diversity, and we want women, blacks, Latinos, gays and Muslims at the table — er, so long as they aren’t conservatives.” He then writes, “the one kind of diversity that [progressives] disregard is ideological and religious. We’re fine with people who don’t look like us, as long as they think like us.”

In his article, Kristof quotes black evangelical sociology professor George Yancey: “Outside of academia I faced more problems as a black. But inside academia, I face more problems as a Christian, and it is not even close.” (Emphasis added). A few paragraphs later, Kristof quotes another black evangelical professor, Jonathan L. Walton, then of Harvard (now of Wake Forest): “Of course there are biases against evangelicals on campuses. The same arguments I hear people make about evangelicals sound so familiar to the ways people often describe folk of color, i.e. politically unsophisticated, lacking education, angry, bitter, emotional, poor.” To his credit, Kristof highlights (and criticizes) how the problem is particularly bad in academia, where one study showed there are fewer Republicans among professors (6-11%) than there are self-described Marxists (18%).

In short, the DEI movement is hypocritical as it relates to diversity and inclusion. Admirably, it loves diversity of race (which it admits is a social construct, i.e., skin-deep), yet it does not like diversity of belief or thought (let alone of speech).

Now, to be fair, I tend to think western white Christians are reaping what they have sown on some level. For centuries, much of western white Christianity talked a great game (e.g., the Declaration of Independence) but engaged in, or sometimes even spearheaded, horrific practices like slavery and Jim Crow. Thus, DEI proponents may argue that white conservative Christians are finally getting their comeuppance, their just deserts. “Hey, you dished out slavery and Jim Crow for two centuries; you can put up with a little bit of social ostracism, you know. Remember, you reap what you sow. ;)” Shall we say “amen” or “touché” to that, or recognize that “we become what we hate” (apparently, an old yoga maxim)? I say the latter.

Okay, so that provides a general overview of my dislike for the D and I, but what about the E? I just typed the terms “what is equity social justice” into my Google search bar and received the following answer atop the search results (per United Way): “Equity, in its simplest terms as it relates to racial and social justice, means . . . allocating resources and opportunities as needed to create equal outcomes for all community members.” (Emphasis added). Whoa. It is definitions like these that generate accusations of cultural (or actual) Marxism.

Now, other search results offered alternative definitions, and undoubtedly there are many folks who support DEI and would disagree with the above-quoted definition. Like me, they may think that the above-quoted definition describes something that is actually inequitable. This is because fairness often requires unequal outcomes (take sporting events, for an obvious example). But we can’t deny that an increasing number of Americans advocate for equality of outcome, including very influential Americans. Case in point, our Vice President Kamala Harris tweeted a video two days before the 2020 presidential election in which she said, “Equality suggests, ‘Oh everyone should get the same amount.’ . . . It’s about giving people the resources and the support they need so that everyone can be on equal footing. Equitable treatment means we all end up at the same place.” This is arguably more extreme than the United Way definition. If American universities applied Vice President Harris’s logic, they would hire just as many Republicans as Democrats/Progressives so that an equal number would “all end up at the same place.”

Thus far, the content of this blog post hasn’t exactly corresponded with its title. But to that point we now turn. Does Christianity advance DEI, or is it DOA (dead on arrival) when held up to the DEI standard? I will now argue that Christianity promotes DEI, in the truest sense.

We will begin with diversity. In the west, Christianity is increasingly labeled a colonialist, patriarchal white man’s religion. In reality, this is far from the truth. “When you mock Christians, you’re not mocking who you think you are,” says Yale law professor and leading black public intellectual Stephen Carter.

What is Professor Carter talking about? According to Pew Research Center, roughly 80% of both African Americans and Latino Americans identify as Christian, whereas just 70% of white Americans do. Globally, the stats are even more supportive of Professor Carter’s point: worldwide, 26% of Christians reside in Sub-Saharan Africa, 25% in Latin America, and 13% in Asia-Pacific. Less than 35% reside in Western Europe and North America. By 2060, it is expected that close to 80% of all Christians will reside outside of North America and Western Europe.

Now compare Christianity’s racial diversity to that of Islam, Buddhism, Hinduism, Judaism, or the Nones (a new term referring to the religiously unaffiliated). Christianity is far more racially diverse. It is not even close—it’s a landslide. This is a natural outcome of Christianity’s teaching. Jesus’s final words to his disciples were to “go make disciples of all nations”; in the original language, that last word is “ἔθνος” (ethnos), from which we derive terms like “ethnic” and “ethnicity.” As author Rebecca McLaughlin (PhD, Cambridge) notes:

Contrary to popular conceptions, the Christian movement was multicultural and multiethnic from the outset. Jesus scandalized his fellow Jews by tearing through racial and cultural boundaries. For instance, his famous parable of the good Samaritan was shocking to its first hearers because it casts a Samaritan—a member of a hated ethno-religious group—as a moral example. Today’s equivalent would be telling a white Christian who had been raised with unbiblical, racist assumptions a story in which the hero was a black Muslim.

For a more thorough exploration of this subject, read McLaughlin’s book Confronting Christianity and specifically chapter 2, titled “Doesn’t Christianity Crush Diversity?” Needless to say, the answer is no.

Next, we turn to equity. No one can dispute that Jesus’s treatment of women, lepers, tax collectors, and other second-class citizens or social outcasts was revolutionary. As outlined by non-Christian historian Tom Holland (a graduate of both Oxford and Cambridge) in his recent book “Dominion: How the Christian Revolution Remade the World,” many of our western, secular human rights can actually be traced to Christianity, even if they are not all consistent with it. In reviewing Holland’s book, Cambridge Professor James Orr notes, “this remarkable book does convince us that the moral grammar of self-consciously secular progressives would be unintelligible in a world in which Jesus of Nazareth had never existed.”

Now, if you want equity in the sense advocated for by the likes of United Way and Vice President Harris (i.e., equal outcomes), the Bible offers you something, too. Just read Jesus’s Parable of the Vineyard Workers, in which a landowner (representative of God) gives equal pay to all his workers, regardless of whether they started work in the morning or simply “worked only one hour.” The workers who had labored all day “began to grumble against the landowner” because he unfairly “made [the other workers] equal to [them].” (Mt. 20:11-12). (This may raise the question, “Doesn’t this lend itself to some form of socialism?” Well, the landowner (God) defended himself on the basis that, “Don’t I have the right to do what I want with my own money? Or are you envious because I am generous?” (Mt. 20:15). With socialism and other similar forms of government, the government’s money is arguably not its own, nor is a government, unlike God, inherently good.)

Finally, we turn to inclusion. Arguably, Christianity is the most inclusive religion, not just in terms of diversity, but in terms of what theologians call soteriology (the means to salvation). Muslims, for example, believe that you must earn your way to salvation by repeatedly practicing the five pillars, but that salvation can never be assured, as Mohammed himself noted. Hindus believe you must eliminate all evil in your life. Buddhists believe you must eliminate all desire in your life. Some refer to wokeism as a religion, with its “idea of original sin (being born white and/or male), rituals (including self-flagellation), symbols, heretics (hello, JK Rowling), and de facto priests and prophets,” like Ibram X. Kendi and Robin DiAngelo. I do not know if that is entirely fair, but there is some truth there, and there is arguably a lack of grace in the movement, too. Christianity, meanwhile, says “it is by grace you have been saved, through faith” in Jesus and not by your own efforts or works. (Eph. 2:8-9). Christianity says come as you are. I think of the opening lyrics from Lecrae’s song “Take Me As I Am”:

Christ through faith
I talked to a cat the other day
And he was like;
“Man I really wanna come to Christ
But I gotta clean my life up first, get my sins together”
I told em, I used to think that way too
I thought I had to change myself before I could come to Christ
But Christ changed me
Let me tell you my story, it starts like this

Below is a video featuring a discussion between English New Testament scholar NT Wright and non-Christian British author and political commentator Douglas Murray about forgiveness in the current cultural moment vs. Christian forgiveness. It’s only seven minutes long. Check it out.