There’s nothing necessarily wrong with diversity as a social or organizational goal — bringing together different outlooks prevents groupthink and facilitates creativity — but our modern fixation on celebrating different ethnicities and cultural backgrounds is a shift from the conception of diversity that’s part of this country’s birthright.

Indeed, one of the distinctive, even unique, characteristics and achievements of the early American republic was that it stood for religious diversity — and came into being without state sanction of any particular religion. That the Puritans of New England, the Quakers of Pennsylvania, and the Catholics of Maryland would coalesce into a single political entity with no denomination holding sovereign sway would have been unheard-of on the other side of the Atlantic, where religious wars had ravaged Britain and the European continent for centuries. In the words of Yale historian Jon Butler:

Through their daily interactions, these Americans created a living foundation for the First Amendment. After Independence their active diversity of faiths led Americans to the groundbreaking idea that government should abandon the use of law to support any religious group and should instead guarantee free exercise of religion for everyone.

The Founders valued religious diversity both as a safeguard of individual liberty against state oppression and as a way of keeping the peace among different peoples, in each respect a deliberate departure from the European example.

This principle thankfully extended to Jews. George Washington famously asked, or prayed, one might say, that “the Children of the Stock of Abraham, who dwell in this land, continue to merit and enjoy the good will of the other Inhabitants; while every one shall sit in safety under his own vine and figtree, and there shall be none to make him afraid.”

That original ethos explains why our public officials have long insisted that “diversity is our strength.” In a speech in 1948 on the merits of free-thinking democracy over the ideological uniformity of Communism, Justice William O. Douglas stressed that Americans’ “tastes in art, literature, and philosophy, our religious creeds, our political faiths, and our economic theories, have differences as great as those that mark the Great Plains from the crags of the Tetons,” and that such wide-ranging diversity “is our strength.” But that diversity was celebrated against the background of certain core principles that Americans held in common. As Richard Nixon remarked while campaigning in 1960, “Diversity is our strength, but it is a strength only if we are united in the face of those who are opponents of all religion, of all idealism, of all freedom.”

We can draw three broad lessons from those early invocations. First, until fairly recently in America, “diversity” referred to a variety of ideas, beliefs, and opinions — not races or ethnicities as such. Second, diversity isn’t an inherent good, but one conditioned on having some background unifying purpose, much as a basketball team has players of different specialties (scoring, rebounding, defense), or, as the traditional motto on the Great Seal puts it: e pluribus unum (out of many, one). Third, and perhaps most important for understanding our current discourse, America’s founding legal expression of diversity, the First Amendment, facilitates and strengthens differing creeds and beliefs rather than imposing any particular ideology, religious or secular.

By contrast, diversity today almost always connotes a set of values at odds with these lessons: diversity along lines of race, gender, or sexual orientation; the privileging of certain groups’ advancement at the expense of the broader populace; and a commitment to identity-based metrics of social engineering. Diversity has been politically redefined in a way that crowds out other core classical liberal — and American and Jewish — values such as liberty, equality, and virtue. When laws and norms changed to break down barriers based on immutable characteristics, the result was an organic and rich diversity that actually did strengthen America. Today’s operationalization of diversity, by contrast, takes us away from Martin Luther King Jr.’s famous dream of a nation where one’s character is more important than one’s skin color.

This idea of elevating difference — superficial “United Colors of Benetton” difference at that — above other goals is inextricably linked to the idea of remedying past oppression. Many believe that we must actively encourage this specific brand of diversity because past discrimination excluded certain ethnic and racial groups from America’s original diversity ethos. Past oppression manifests today as “systemic racism,” the argument goes, so we must inculcate diversity through government, corporate human-resources departments, university offices, and other institutions.

At a time when institutions are scaling back and recalibrating their diversity efforts — as a result of cultural retrenchment, political developments, and legal rulings — it’s important to explore how the diversity, equity, and inclusion triumvirate emerged (not in that order) and came to redefine our founding concept of diversity. This formalization of a “diversity-industrial complex” began with good intentions and became what it is through a Supreme Court quirk.


The original efforts to foster racial diversity in America focused specifically on colorblindness, not color preference. The idea was to apply merit-based standards while ensuring that historically underprivileged groups had equal opportunities for advancement. President John F. Kennedy’s Executive Order 10925, signed March 6, 1961, tasked the new Committee on Equal Employment Opportunity with requiring that federal contractors “take affirmative action to ensure that applicants are employed . . . without regard to their race, creed, color, or national origin.” As a result of universities’ close relationship with the federal government and as part of the drive to end educational segregation in the aftermath of Brown v. Board of Education, affirmative action quickly found its way into university admissions.

One can see here how “affirmative action” began with the principle of colorblindness, “without regard” for “race, creed, color, or national origin.” The assassination of King in 1968 sent the nation into a state of grief and anger and created a greater sense of urgency to implement affirmative action. Minority students demanded that universities make their student bodies more racially representative of the national population. Four weeks after King’s death, Harvard’s dean of admissions committed to admitting substantially more black students, as well as “a better representation of a black point of view on the Admissions staff.”

But it would be a decade before those moves would be put into a diversity rubric by the Supreme Court’s decision in 1978 in Regents of the University of California v. Bakke. That case was brought by a white applicant who had been denied admission to UC Davis School of Medicine despite having a higher GPA and better MCAT scores than most of the admitted minority students. After admitting an all-white inaugural class in 1968, the faculty had decided to reserve 16 percent of admission spots for minorities. When Allan Bakke challenged that practice, it looked as though the country had reached a turning point. Either higher-ed admissions would be colorblind, or universities would be free to engage in race-balancing to fulfill their vision of justice.

But neither view held sway. Four justices would have allowed racial preferences “to remedy disadvantages cast on minorities by past racial prejudice,” while four others would have outlawed the consideration of race altogether. The remaining justice, Lewis Powell, voted to invalidate UC Davis’s racial quotas but to allow the use of race as one of many factors to advance what he considered a compelling state interest in having a “diverse student body.” With this decision, Powell planted the seed of the entire diversity conceit, which eventually became a higher priority in higher education than open inquiry and the pursuit of knowledge. The word “diversity” thus became divorced from any definable standard: because the goal is an indefinable balance of differences, officials and administrators pursuing it can choose according to personal whim rather than objective criteria.

Diversity thus became the only permissible justification for universities to consider race in admissions, even as the underlying aim remained making amends for America’s legacy of racial discrimination. According to Robert Comfort, the law clerk who worked most closely on the case, Powell’s appeal to diversity was a deliberate effort to preserve racial preferences despite laws and precedents that explicitly forbade them. Lawyer Mark Mutz and medical professor Richard Gunderman have called it “an obfuscation” designed to remedy the nation’s sins, “a ruse that has resulted in an almost Orwellian distortion of the meaning of diversity” — and in turn of broader social policy. It would have been as if the Founders, instead of protecting religious freedom and diversity, had mandated agnosticism.

A quarter century later, in two cases involving the University of Michigan, the Supreme Court endorsed Powell’s diversity rationale as part of a holistic race-conscious admissions program in Grutter v. Bollinger (2003), while rejecting, in the companion case Gratz v. Bollinger, a mechanical system that assigned race a fixed number of points. Grutter gave administrators far more leeway to decide what they meant by diversity, allowing them to replace objective standards or ratios with unchecked social engineering. This granted universities the freedom to prioritize racial diversity over any other kind — including religious, intellectual, political, or socioeconomic — while cloaking their actions in legal sophistry.

This process of legal and social redefinition of diversity was paralleled by a political redefinition. The creation of the Equal Employment Opportunity Commission (EEOC) by the Civil Rights Act of 1964, alongside President Lyndon B. Johnson’s Executive Order 11246 of September 1965, meant that discrimination laws would be federally enforced. In an effort to avoid EEOC complaints and fines, corporations and academic institutions started building “diversity” programs themselves. Diversity training began to incorporate principles of social justice and what became known as the “appreciation of differences.”

Nobody seemed satisfied with these new bureaucratic programs. Minorities called them a waste of time, while whites accused their purveyors of “reverse discrimination.” In an effort to defend the programs against such claims, their advocates redefined success. Success wouldn’t be mere diversity or inclusion in the workplace, but a far more ambitious goal that would at once entrench the programs and become the next piece of the diversity puzzle: equality of outcomes, or, as it would become known, “equity.” (Like diversity, equity has a perfectly unobjectionable dictionary definition that boils down to treating people fairly.)


This is how the principle of diversity evolved into DEI. Looking back, we can see three eras in which the groundwork for the full flowering of diversity-plus was laid: The 1960s through the mid-1970s were about integrating racial minorities into workplaces, education, and neighborhoods. This was the era of inclusion. The mid-1970s through the mid-1990s were about multicultural awareness and recognition of minorities and their accomplishments, including dueling metaphors of melting pots and salad bowls. This was the era of diversity. Finally, in the mid-1990s, the two concepts merged — a celebration of diversity, justified by an appeal to inclusion, with the expectation that corporate, educational, and government institutions would reflect the nation’s demographics. This was the era of equity.

The advent of social media provided an enforcement mechanism to hold entities “accountable” to this emergent principle. DEI became a financially lucrative industry in both the academic and corporate spheres, with the expansion of training, messaging, and management focused on compliance. Like other industrial complexes, DEI thus took on a self-perpetuating, employment-driven momentum, with certain groups holding financial stakes in the new industry.

And so the diversity-industrial complex captured higher education — among other areas — becoming the tail that wagged the institutional dog. As education-policy researchers Jay Greene and Frederick Hess put it, “DEI staff operate as a political commissariat, articulating and enforcing a political orthodoxy on campus.”

There’s nothing hidden about that agenda. The National Association of Diversity Officers in Higher Education calls itself “a leading voice in the fight for social justice,” with a plan for “creating a framework for diversity officers to advance anti-racism strategies, particularly anti-Black racism, at their respective institutions of higher education.” This effort “requires confronting systems, organizational structures, policies, practices, behaviors, and attitudes,” the group states. “This active process should seek to redistribute power in an effort to foster equitable outcomes.” Accordingly, in December 2021, the Council for Higher Education Accreditation, which represents 6,000 universities and recognizes 60 accreditors, implemented its first DEI requirement.


But the pandemic years may end up being the high-water mark of America’s DEI obsession. It’s too soon to know the full effects of the Supreme Court’s effective overruling of Bakke and Grutter in the 2023 case Students for Fair Admissions v. President and Fellows of Harvard College. Advocates of race-conscious policies can now drop the “diversity” pretext for their social justice activism, but the justices definitively outlawed the use of naked racial preferences. A legal-historical reading of this development suggests a return to America’s original interest in diversity: allowing it rather than imposing it. It is a rare act of legal self-correction. Those who advocated it deserve credit, as does the judiciary itself.

Still, the Students for Fair Admissions litigation laid bare several uncomfortable truths about the diversity-driven social engineering that Justice Powell spawned. The plaintiffs presented compelling evidence that the oldest private and public universities in the country, Harvard and the University of North Carolina, respectively, used racial preferences to a far greater extent than Grutter allowed. For example, at any given level of academic achievement, the acceptance rate for African-American applicants was many times greater than for whites and especially Asian Americans. At the same time, the number of Asian Americans at elite schools has stayed relatively constant even as their proportion of qualified applicants has exploded. The further irony is that, although Powell had held up Harvard’s admissions program as a model, its “holistic” approach had a dark origin: it was devised in the 1920s and ’30s to restrict the number of Jewish students.

Perhaps that’s why a majority of justices were skeptical of the arguments for “race-conscious admissions,” as Harvard’s and UNC’s advocates called them. Race-consciousness can go both ways.

Moreover, Grutter itself required that such policies “must be limited in time” and should face “sunset provisions” forcing regular “reviews to determine whether racial preferences are still necessary.” The Harvard and UNC lawyers’ inability to define an end point was telling. What the Court authorized in Grutter was a temporary, grudging exception to America’s equality ideals, but the exception developed into a threat across the legal landscape and society as a whole. Although the case was about university admissions, it had been taken to signal that it might be legally permissible in other contexts to discriminate based on race to achieve some greater good.


But even in the educational setting, the diversity framework in Bakke and Grutter didn’t achieve the benefits its proponents touted. Instead of creating academic communities with a broad mix of perspectives, race-based admissions entrenched wealth and privilege by giving preferences to upper-class blacks and Hispanics over lower-class whites and immigrant Asians. It also led to separate housing facilities, orientation programs, and graduation ceremonies. By the time Students for Fair Admissions came down in June 2023, it was long past time to recognize that Grutter had been a deviation from equal-protection principles and engendered race-balancing under the guise of diversity.

The failure of Harvard and UNC to sustain the legality of their admissions programs also revealed another blind spot inherent in the diversity-industrial complex: its exclusion of Jews. Somehow, America’s great universities were attempting to uphold an inclusion program that had originally been designed to exclude. Let’s not forget that the primary groups negatively affected by the diversity-industrial complex, Jews and Asians, who make up 2.4 percent and 5.9 percent of America’s population, respectively, are minorities themselves — and unquestionably among the most historically oppressed demographic groups.

Within the Jewish community, there exists a diversity of views, so to speak, on how to approach the problem of DEI offices’ inability or unwillingness to remedy antisemitism in academia. Some believe that diversity policies aren’t going anywhere and that Jews should work through them to achieve a more favorable position in the intersectional matrix. Particularly after October 7, however, many are realizing the inherent incompatibility of postmodern identity politics and Jewish survival. The fact is that Jewish achievement in America has led to troubling levels of resentment, especially among the young. A recent Harvard/Harris poll found that two-thirds of voters ages 18–24 believe that Jews are an oppressor class. Although that view is rejected by broader society, it could become a serious problem as members of Gen Z become our cultural shapers and the gatekeepers of our legal and political institutions.

Given the obvious dangers that Jews face from an ideology that advocates equality of outcome and demonizes free speech, intellectual rigor, and hard work, there can be no conciliation. With antisemitism baked into bureaucracies that view the world through identitarian lenses, dismantling these illiberal structures is the only way forward.

Nobody of good faith can object to making people of all backgrounds, races, and ethnicities feel welcome and included at their place of work or study, but the diversity-equity-inclusion dogma we’ve experienced in recent years warps those basic words to mean their opposite. It’s one of the bitter ironies of modern American life that DEI stifles intellectual diversity, undermines equal opportunity, and silences dissenting voices. DEI as practiced stands for discrimination, exclusion, and indoctrination — and segregating people in ways that make social justice warriors indistinguishable from white supremacists.

As legislatures and university systems roll back DEI policies — most notably diversity statements for faculty hiring — the pendulum appears to be swinging back. The new Trump administration has also come out swinging, rescinding not only President Biden’s insertion of DEI commissariats across the federal government but also LBJ’s “affirmative action” order of 1965. Assuming such executive actions are enforced by the relevant agencies in education, employment, contracting, business regulation, and elsewhere, we could see a legal sea change to match the cultural “vibe shift” that the 2024 election revealed. That’s a good thing, because no country can be durably held together by what makes its citizens different, or the celebration of diversity for its own sake. E pluribus unum.