What Simone de Beauvoir's 1949 Philosophy Teaches Us About Effective AI Governance

I’ve spent the past few years wrestling with the complexities of AI governance, and specifically with how to build governance processes and teams that are small and agile rather than complex and monolithic. Recently, I found an insight from an unexpected source: the framework of categorization that Simone de Beauvoir develops in her 1949 book _The Second Sex_ (French: _Le Deuxième Sexe_).

As I researched her ideas in preparation for episode 04 of my podcast, I realized that our approach to AI governance falls into exactly the trap Beauvoir warned about decades ago: compartmentalization instead of integration, producing artificial separations that don’t reflect reality. We tend to carve technology into regulatory silos (safety, privacy, fairness, and so on), drawing borders that the technology itself simply doesn’t recognize. These aren’t just organizational choices; they’re bureaucratic limitations that can blind us to how these systems actually function in the real world.

Here’s a real-world example: autonomous vehicles. Modern autonomous vehicles don’t just drive themselves; they also collect massive amounts of personal data, including biometric data. A driver monitoring system that tracks eye movements is simultaneously a safety feature and a biometric data collection system. The mapping technology that helps the car navigate safely also creates detailed records of public and private spaces, and the car records voice commands along the way. Yet we regulate these as entirely separate domains through different agencies, a compartmentalized approach that can leave blind spots where significant ethical issues aren’t adequately addressed by any single regulatory domain.

What companies can do today:

While we wait for regulatory frameworks to evolve, forward-thinking companies can take immediate steps to bridge these artificial divides:

  1. Restructure AI ethics reviews around cross-functional teams from the start, rather than treating each domain as a discrete checkpoint. When privacy, safety, fairness, and technical experts collaborate from day one, they identify intersection issues before those issues become embedded in the architecture.
  2. Create a conceptual “boundaries map” for every AI system or product: explicitly document where the product operates across traditional regulatory boundaries and what risks emerge at those intersections (see the first sketch after this list).
  3. Designate an “integration team” whose specific responsibility is to identify impacts that cross traditional domain boundaries, assigning accountability for issues that would otherwise fall through organizational cracks. This team can be a subset of the larger AI governance team, with representation from every domain.
  4. Develop evaluation frameworks that assess how well your governance processes identify and address cross-domain issues, not just compliance within individual domains (see the second sketch after this list).
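
To make the “boundaries map” in point 2 concrete, here is a minimal sketch of what such a document could look like as a data structure. The domain names, the `BoundaryCrossing` fields, and the autonomous-vehicle entries are all hypothetical, chosen to illustrate the shape of the artifact rather than to prescribe a schema:

```python
from dataclasses import dataclass, field

# Hypothetical set of regulatory domains a feature can touch.
DOMAINS = {"safety", "privacy", "fairness", "security"}

@dataclass
class BoundaryCrossing:
    """One point where a single feature spans multiple regulatory domains."""
    feature: str
    domains: frozenset  # the traditional domains this feature touches
    risk: str           # the risk that emerges at the intersection
    owner: str          # who is accountable for the cross-domain issue

@dataclass
class BoundariesMap:
    system: str
    crossings: list = field(default_factory=list)

    def add(self, feature, domains, risk, owner):
        assert set(domains) <= DOMAINS, f"unknown domain in {domains}"
        self.crossings.append(
            BoundaryCrossing(feature, frozenset(domains), risk, owner)
        )

# Example entries, drawn from the autonomous-vehicle scenario above.
av_map = BoundariesMap(system="autonomous-vehicle")
av_map.add(
    feature="driver eye-tracking camera",
    domains={"safety", "privacy"},
    risk="alertness monitoring doubles as biometric data collection",
    owner="integration-team",
)
av_map.add(
    feature="HD mapping sensors",
    domains={"safety", "privacy"},
    risk="navigation maps create detailed records of private property",
    owner="integration-team",
)
```

The point isn’t this particular schema; it’s that every entry names the intersecting domains and an accountable owner, so a cross-domain issue can’t quietly fall between teams.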
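
For point 4, one possible (and admittedly crude) metric is the fraction of issues a review process surfaces that actually span domains, on the theory that a process which only ever logs single-domain issues is reproducing the silos this post describes. This is a hypothetical sketch, not an established evaluation framework:

```python
def cross_domain_coverage(issues):
    """Fraction of logged issues that touch two or more regulatory domains.

    `issues` is a list of sets of domain names, one set per issue the
    governance process identified during a review.
    """
    if not issues:
        return 0.0
    multi = sum(1 for domains in issues if len(domains) >= 2)
    return multi / len(issues)

# Example: three issues from a hypothetical review of the vehicle above.
review_log = [
    {"safety", "privacy"},  # eye-tracking camera
    {"privacy"},            # voice-command retention
    {"safety", "privacy"},  # HD mapping of private property
]
print(f"cross-domain coverage: {cross_domain_coverage(review_log):.0%}")  # 67%
```

A low number doesn’t prove the governance process is failing, but it’s a cheap signal that intersections may be going unexamined.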

What I’ve come to believe is that simple consolidation isn’t the answer; we’ve seen how merging agencies into monoliths often creates bigger bureaucracies without solving the underlying problem. What we need instead are genuinely interdisciplinary approaches. I’m not suggesting Beauvoir had all the answers for 21st-century technology challenges, but her framework shines a light on an important issue: our biggest obstacle may not be technical complexity, but our own organizational habit of separating what should be integrated.