Problems with AI and You:
AI Superstar Timnit Gebru on Ethics in AI and Why We Need to Care
Introducing Dr. Timnit Gebru, the computer scientist extraordinaire! She’s like the Hermione Granger of the AI world, except she’s not fictional (and she’s probably better at coding).
Dr. Gebru is a superstar when it comes to algorithmic bias and data mining. She’s the brains behind the Gender Shades study (with Joy Buolamwini), which exposed commercial facial analysis software’s tendency to misclassify darker-skinned women at far higher rates than anyone else (seriously, what’s up with that?). She’s also the co-founder of Black in AI, an organization on a mission to get more Black people involved in AI. You go, girl!
Dr. Gebru is not just smart, she’s also sassy. She doesn’t shy away from speaking up about the ethical and social implications of AI. In fact, according to her Wikipedia page (which is known for its authentic journalism), she left her job at Google because they wanted her to take her unpublished paper and shove it where the sun don’t shine. (Just kidding, they only asked her to retract it or strip the Google co-authors’ names off of it. But still, it was a big deal.)
Speaking of ethical concerns, let’s talk about ChatGPT. Middlebury College, like many other institutions, has yet to establish a campus-wide policy regarding the use of ChatGPT and other AI tools. This means each professor decides for themselves whether their students can use ChatGPT, which is kind of like giving your students a genie in a bottle (except instead of wishes, they get answers to their essays).
However, ChatGPT comes with its own set of problems. For one, the paid premium version gives students who can afford it an unfair advantage over those who can’t. And let’s not forget that ChatGPT is trained on an enormous pile of internet text, which includes some pretty nasty stuff. Developers try to filter the worst of it out, but bias still seeps through, just like that one drop of ketchup that always ends up on your white shirt. (A toy sketch of why follows below.)
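For the curious, here’s that toy sketch: a few lines of Python showing why filtering is so hard. This is made up for illustration, not anything OpenAI actually runs. A naive blocklist catches explicit slurs but waves the politely worded bias right through.

# Toy sketch (not any real provider’s pipeline): a naive blocklist filter
# applied to training text. Blocklist terms are hypothetical stand-ins.
BLOCKLIST = {"badword1", "badword2"}

def passes_filter(sentence):
    """Keep a training sentence only if it contains no blocklisted word."""
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    return words.isdisjoint(BLOCKLIST)

corpus = [
    "What a badword1 thing to say.",          # caught: explicit term
    "Women just aren't suited for science.",  # not caught: biased, but polite
]

print([s for s in corpus if passes_filter(s)])
# Only the explicit sentence is dropped; the biased one stays in the training set.

In other words, the ketchup gets past the napkin.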
So what did Dr. Gebru have to say about all of this? Well, she talked about the reliability (or lack thereof) of computer vision technology. Turns out, those facial recognition tools aren’t nearly as accurate as advertised, and their mistakes aren’t spread evenly: they fail far more often on some groups than others (see the little audit sketch below for what that looks like in code). And when they’re used to judge people’s personalities and emotions (yes, that’s a thing), they perpetuate structural racism. Oh, and don’t get her started on Faception, a company that claims it can spot a “terrorist” from a face scan and, surprise, ends up flagging people from marginalized groups. Not cool, Faception.
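If you’re wondering what “biased accuracy” looks like in practice, here’s a minimal sketch of the kind of audit Gender Shades ran: tally a model’s error rate separately for each demographic group instead of reporting one flattering average. The predictions below are hypothetical placeholders, not real benchmark data.

from collections import defaultdict

# (group, was_the_prediction_correct) -- hypothetical results for illustration
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.0%} error rate")
# A single aggregate accuracy number would hide this gap entirely.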
Dr. Gebru also talked about the lack of diversity in training datasets. It’s a big problem because it leads to exactly those accuracy gaps between demographic groups. And even when companies do try to gather more diverse data, they often do it unethically. It’s like they’re playing Pokémon, except instead of catching them all, they’re scraping photos of darker-skinned people and of transgender YouTubers without anyone’s consent.
But here’s the thing: technology isn’t always used the way it was designed to be, and sometimes it creates more discrimination than efficiency. So Dr. Gebru suggested that we need to think beyond diversity in datasets and push for structural, real representation. We need to make sure that decision-making panels aren’t just filled with people from dominant groups and those closest to the money. Because fairness isn’t just about math, it’s about society.
So, here are some questions to chew on, dear reader: Should we prohibit, or at least regulate, the use of facial recognition on people? And how could computer vision technology actually help marginalized communities? We’re all ears!