AI, Pay, and Accountability: What This Course Changed for Me
- Chanele Clark
- Dec 18, 2025
- 2 min read

For my final paper, I wrote about AI and compensation benchmarking because comp is one of those HR areas where the stakes are inherently high. People don’t experience compensation as a “process.” They experience it as fairness. They experience it as respect. They experience it as, “Do y’all value me or not?” So, when we bring AI into benchmarking, especially for job matching and market pricing, I wanted to look past the shiny promise of speed and consistency and ask the bigger question: what are we actually scaling?
This class helped me connect the dots between AI in HR and AI as a whole. One of the lessons I’m taking with me is that AI doesn’t remove human bias; it can automate it. If the model is learning from market data or historical pay practices that reflect inequity (and let’s be real… they often do), the tool can reinforce pay gaps while still looking “objective” on paper. And because the output is wrapped in analytics language and vendor confidence, it can be easy for organizations to accept it without challenging it.
Algorithmic Responsibility also pushed me to think more clearly about transparency and accountability. If HR can’t explain why an AI tool landed on a particular benchmark, job match, or range, then we’re not really practicing good HR; we’re outsourcing judgment. That’s especially risky in comp, where trust is everything and small differences add up over time.
Overall, this course didn’t make me anti-AI. It made me more intentional. I’m leaving with a stronger mindset around guardrails: validating outcomes, monitoring for adverse patterns, asking better questions of vendors, and keeping humans responsible for the final decisions. AI can absolutely support better pay practices, but only if we treat it like a tool that needs governance, not a shortcut that replaces it.