Can you describe a scenario where AI could inadvertently perpetuate discrimination? How would you address it?

Instruction: Identify a potential scenario where AI systems might contribute to discrimination and propose methods to prevent this.

Context: This question probes the candidate's ability to recognize situations where AI systems could lead to biased outcomes and tests their capacity to think critically about solutions to prevent or minimize discrimination.


A common example is a resume-screening model trained on historical hiring data from a company that has systematically underselected certain groups. Even if the model never sees a protected attribute directly, it can still learn discriminatory patterns through proxies such as schools, career gaps, geography,...
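The proxy effect described above can be demonstrated with a minimal sketch. Assume a hypothetical synthetic dataset where the protected attribute (`group`) is never given to the model, but a visible feature (`zip_code`) correlates strongly with it, and the historical hiring labels are biased by group. A "model" that conditions only on the visible proxy still reproduces the group disparity. All names and probabilities here are illustrative assumptions, not real data.

```python
import random

random.seed(0)

# Hypothetical synthetic data. The protected attribute "group" is hidden
# from the model; "zip_code" is a proxy that agrees with it 90% of the time.
rows = []
for _ in range(10_000):
    group = random.random() < 0.5                 # protected attribute (never a feature)
    zip_code = int(group ^ (random.random() < 0.1))  # 90%-aligned proxy feature
    hired = random.random() < (0.7 if group else 0.3)  # biased historical labels
    rows.append((group, zip_code, hired))

def hire_rate(zip_value):
    """Empirical hire rate conditioned only on the visible proxy feature."""
    selected = [hired for _, z, hired in rows if z == zip_value]
    return sum(selected) / len(selected)

rate_zip1 = hire_rate(1)
rate_zip0 = hire_rate(0)
print(f"hire rate, zip=1: {rate_zip1:.2f}")
print(f"hire rate, zip=0: {rate_zip0:.2f}")
```

Even though `group` never appears as an input, the rates conditioned on `zip_code` diverge sharply (roughly 0.66 vs. 0.34 under these assumed probabilities), showing how a model trained on this feature would inherit the historical bias.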