
My Heartbeat Behind Structural Equation Modeling (SEM)

Fatih Ozkan | Mar 26, 2023

SEM has a reputation. It can feel like wizardry, a little intimidating, and suspiciously good at producing clean diagrams and messy arguments at the same time. I used to treat it like a fancy calculator. That was a mistake.

These days, I think about SEM as a discipline of making assumptions visible. You write down what you believe about a construct, a process, or a system, then you test how well that story holds together in data. Not perfectly, not magically, but honestly.

One of the moments that pushed me into taking SEM more seriously happened while I was in Turkey. I kept running into the same pattern in conversations and in datasets: people were arguing about “the thing” (motivation, anxiety, belonging, engagement) without agreeing on how it was being measured. SEM gave me a language for that mismatch, and a way to handle it without pretending it didn’t exist.

The Temptation: Fit-Chasing

I get why people fall into it. SEM software gives you numbers that look like a scoreboard: CFI, TLI, RMSEA, SRMR. If you tweak the model, the scoreboard sometimes improves. It can feel like progress.

The trap is when the model becomes a game of “make the fit prettier,” instead of “make the explanation clearer.” Modification indices become a slot machine, and suddenly you’re adding paths you can’t defend, just because the output suggested it.

When SEM turns into a transaction between you and a fit index, everybody loses. Especially the reader.
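To see why a fit index makes a poor scoreboard, it helps to remember how mechanical it is. RMSEA, for instance, is just a function of the model chi-square, its degrees of freedom, and sample size. A minimal sketch, using the common N − 1 convention (some programs divide by N instead, which shifts the value only slightly):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Point estimate of RMSEA from the model chi-square.

    Uses the N - 1 convention; some software uses N instead.
    """
    if df <= 0 or n <= 1:
        raise ValueError("df must be positive and n > 1")
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# A chi-square no larger than its df yields RMSEA = 0:
print(rmsea(chi2=40.0, df=45, n=300))            # prints 0.0
# A misfitting model:
print(round(rmsea(chi2=120.0, df=45, n=300), 3)) # prints 0.075
```

Nothing in that formula knows whether the path you just added is defensible; it only knows that the chi-square went down.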

Partnership: Measurement Meets Structure

What I love about SEM is the two-part handshake it forces you to make:

  • Measurement model: What are my indicators, and what do they actually measure?
  • Structural model: Given that measurement, what relationships am I claiming among constructs?

That separation is the whole point. SEM is not just regression with extra steps. It’s an explicit commitment to the idea that measurement error exists, constructs are imperfectly observed, and you don’t get to ignore that just because the regression output “looks fine.”1
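That handshake can be written down literally. In lavaan-style syntax (which Python's semopy also accepts), the measurement model and the structural model are separate lines of the same specification. The construct and indicator names below are hypothetical, purely for illustration:

```python
# lavaan-style specification string; "=~" defines a latent construct
# from its indicators, "~" states a structural regression.
MODEL_SPEC = """
# Measurement model: what the indicators measure
motivation =~ m1 + m2 + m3
engagement =~ e1 + e2 + e3

# Structural model: the relationship claimed among constructs
engagement ~ motivation
"""

# With real data this could be fit via, e.g., semopy:
#   from semopy import Model
#   model = Model(MODEL_SPEC)
#   model.fit(df)
```

Writing the two halves in one place makes the commitment explicit: if the measurement lines are shaky, the structural line inherits the problem.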

An Invitation to Co-Model

If you’re learning SEM, here’s the posture that helped me most: write your model as if you’re going to have to defend it to a skeptical friend who is smart and slightly annoyed. Not hostile, just unimpressed by vibes.

That means:

  • Define constructs in plain language before you draw arrows.
  • State identification choices (and why) instead of assuming they’re obvious.
  • Report what you tried, not only what “worked.”
  • Prefer theory-driven changes over output-driven changes.

In practice, this usually pushes you toward simpler, more interpretable models that still tell a real story.

BOLD Models

The “bold” move in SEM is often not adding complexity. It’s the opposite.

It’s saying, “This is my model, these are my assumptions, and this is the evidence I have.” Then letting the results be what they are. No drama. No cosmetic surgery for fit.

When I do allow changes, I try to treat them like a lab notebook entry: what changed, why I changed it, what I expected, and what happened afterward.
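One way to make that notebook habit concrete is to give each revision a fixed shape, so you can't quietly skip the "why". A sketch, with hypothetical field contents:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRevision:
    """One lab-notebook entry for a model change (illustrative structure)."""
    what_changed: str
    rationale: str      # theory-driven reason, not "the MI said so"
    expectation: str
    outcome: str = ""   # filled in after re-estimation
    logged_on: date = field(default_factory=date.today)

log = [
    ModelRevision(
        what_changed="Freed residual covariance between m2 and m3",
        rationale="Both items share a reversed-wording method effect",
        expectation="Small fit improvement; loadings stay stable",
    )
]
```

An empty `outcome` field is a visible reminder that the change hasn't earned its place yet.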

Nice Theory, but What About Practice?

My practical SEM workflow is basically a loop:

  1. Specify: Write the model you actually believe (before seeing fit indices).
  2. Identify: Confirm the model is identified and estimable.
  3. Estimate: Choose an estimator that matches the data (continuous vs. ordinal indicators, non-normality, missing data).
  4. Evaluate: Look at global fit and local diagnostics, and also look at parameter estimates for sanity.
  5. Revise: Make small, defensible changes, then explain them.
  6. Report: Tell the full story, including alternatives you considered.

Fit indices matter, but they’re not a jury verdict. They’re more like smoke alarms: useful, sometimes annoying, and not the whole picture.2

How I Keep Myself Honest

These are the guardrails I use to avoid “SEM by vibes”:

  • Pre-commitment: I write down the primary model before I start tuning.
  • Transparency: If I explored alternatives, I say so, and I describe them.
  • Robustness: I check whether conclusions change under reasonable estimator or specification choices.
  • Measurement sanity: I don’t rush past the measurement model, especially when constructs are used as predictors or outcomes.
  • Fit with context: I interpret fit indices in light of sample size, model complexity, and measurement quality, not as universal pass/fail numbers.3
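The robustness check in particular is easy to mechanize: line up the same focal estimate under each defensible specification and ask whether the conclusion survives. A toy sketch with made-up numbers and an arbitrary spread threshold:

```python
# Hypothetical robustness check: one focal path estimated under several
# defensible specifications. Labels and values are purely illustrative.
estimates = {
    "ML, listwise":   0.42,
    "MLR, FIML":      0.39,
    "WLSMV, ordinal": 0.45,
}

signs = {e > 0 for e in estimates.values()}       # one element if sign-consistent
spread = max(estimates.values()) - min(estimates.values())

# 0.15 is an arbitrary illustration threshold, not a published cutoff.
robust = len(signs) == 1 and spread < 0.15
print(f"sign-consistent: {len(signs) == 1}, "
      f"spread: {spread:.2f}, robust: {robust}")
```

If the sign flips or the spread is wide, that belongs in the write-up, not in a drawer.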

None of this makes SEM “easy,” but it makes it defensible. And defensible is the whole game.

Closing Notes

If you’re learning SEM right now and it feels like you’re constantly missing something, that’s normal. SEM is less like memorizing formulas and more like building taste: the ability to tell the difference between a model that is merely flexible and a model that is actually meaningful.

I’m still learning. But SEM has become one of my favorite ways to force clarity, both in my own thinking and in what I can communicate to other people.


- Fatih


  1. Bollen, K. A. (1989). Structural equations with latent variables. Wiley. ↩︎

  2. Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). Guilford Press. ↩︎

  3. Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. ↩︎