A Discussion of AI, Eugenics, and Their Risky Futures.

TW: This post contains thoughts and ideas from the author and Dr. Timnit Gebru on topics such as eugenics. It is Finn’s exploration of the ways AI and unchecked bias might negatively impact the future of the field and amplify inequality, drawing on notes from the talk given by Dr. Timnit Gebru.
Author

Finn Ellingwood

Published

April 28, 2023

Note: This is a more comprehensive and serious continuation of my previous blogpost.

Dr. Timnit Gebru, a prominent AI researcher and advocate for ethical AI, gave a talk at Middlebury College, “Eugenics and the Promise of Utopia through Artificial General Intelligence,” on April 24th, 2023. Dr. Gebru discussed the exploitation of labor in the development of AI, the historical context of eugenics and its potential resurgence through AI, and the centralization of power in AI development. Her talk serves as a wake-up call for those who have been swept up in the hype surrounding AI and its supposed benefits.

Dr. Gebru addressing students at Middlebury College during her talk, “Eugenics and the Promise of Utopia through Artificial General Intelligence.”

Dr. Gebru began by addressing the historical development and economic impact of AI, stating that it has the potential to be the greatest force for economic change in our lifetime. However, she cautioned that this potential comes at a significant cost. Gebru pointed out that the development of AI has relied heavily on the exploitation of labor, particularly of low-paid workers who annotate and label datasets for machine learning algorithms. These workers often have little job security and no benefits, despite the fact that their work is essential to the development of AI. Gebru argued that the AI industry must take responsibility for this exploitation and work to provide better working conditions and protections for these workers.

Importantly, Dr. Gebru also discussed the historical context of eugenics and its potential resurgence through AI. She pointed out that the eugenics movement of the early 20th century was based on the idea of improving the genetic quality of the human population through selective breeding and sterilization, grounded in the belief that certain groups of people were genetically inferior and that their reproduction should be discouraged. Gebru argued that AI has the potential to resurrect these dangerous ideas by perpetuating biases in data and algorithms. She pointed out that the lack of diversity in the tech industry has already resulted in biased algorithms, and that this problem will only get worse unless steps are taken to address it.
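To make the idea of an algorithm reproducing a disparity concrete, here is a minimal sketch (my own illustration, not something from the talk) of a simple demographic-parity check on a system’s decisions. The group names, decision log, and metric are entirely hypothetical; real audits use far richer data and more careful fairness measures.

```python
# Minimal sketch of a demographic-parity check on hypothetical decisions.
# A large gap in approval rates between groups is one simple signal that
# a system may be reproducing a disparity present in its training data.

from collections import defaultdict

# Hypothetical audit log: (group, model_decision), where 1 = approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rate by group:", rates)

# Demographic-parity gap: difference between the highest and lowest group
# approval rates. A gap near zero is one (limited) notion of parity.
gap = max(rates.values()) - min(rates.values())
print(f"Parity gap: {gap:.2f}")
```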

Dr. Gebru’s talk also highlighted the issue of centralization of power in AI development. She pointed out that a few large companies, such as OpenAI and Microsoft, are dominating the AI industry and setting the agenda for its development. This centralization of power is a concern because it limits the diversity of perspectives and voices involved in AI development. Gebru argued that the industry must work to decentralize power and encourage a broader range of actors to participate in AI development.

Dr. Gebru’s talk serves as a reminder that AI is not a neutral technology. It has the potential to be used for good or for harm, and it is up to us to ensure that it is used in a responsible and ethical manner. The inherent biases of specific datasets, and of the people who create them, need to be diligently accounted for. We cannot simply rely on the promises of AI proponents or the hype surrounding the technology. We must critically examine its impact on society and work to mitigate any negative effects.

Critics, on the other hand, argue that these claims are overblown and represent a kind of techno-utopianism that is not based in reality. They point out that while AI has the potential to do a lot of good, it is not a magic solution and comes with its own set of risks and challenges. Connecting to this point, many of these ideas conflating AI with a higher power that will prevail over all bad and malignant forces go back to the first days of eugenics. Much of this can also be traced to the evolution of the historical Protestant work ethic into the present-day culture of tech companies as a whole.

One of the key concerns with the idea of AI as a kind of religion is the potential for blind faith and the abdication of responsibility. If people begin to see AI as a kind of all-knowing and all-powerful entity, they may become complacent and fail to question its decisions or actions. This could lead to a loss of agency and a dangerous concentration of power in the hands of a few large corporate entities with little regard for their effects on societal bias and diversity beyond the impact on their share price.

Continuing, another of the most important points that Dr. Gebru made was that “AI” is not a well-defined term. There are many different types of AI, each with its own potential benefits and risks. Some types, such as narrow AI, are already being used in many applications, such as image recognition and natural language processing. However, other types, such as artificial general intelligence (sometimes called AGI), are still largely hypothetical. Gebru argued that we must be careful not to conflate different types of AI or overstate their capabilities. Many of these claims are made by people with no experience in the field of artificial intelligence or large-model machine learning, and they make them only for the benefit of shareholders and speculators.

Gebru also highlighted the need for those who call themselves “effective altruists” to consider the risks associated with AI development. Effective altruism is a philosophy that advocates for doing the most good possible with one’s resources. However, Gebru argued that many effective altruists have ignored the risks associated with AI development in their pursuit of doing good. She pointed out that AI has the potential to cause significant harm, and that effective altruists must take these risks seriously and work to mitigate them.

The logo of the movement known as “Effective Altruism,” started in the late 2000s, which describes itself on its website as “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.”

In conclusion, the discussion of Dr. Timnit Gebru’s talk highlights the potential benefits and risks associated with the development and implementation of artificial intelligence. While AI has the potential to revolutionize various industries and improve our quality of life, it also poses significant challenges that require attention from policymakers and regulators.

One of the biggest challenges that AI poses is the risk of perpetuating biases and inequalities. As Gebru noted, AI systems can reproduce existing societal biases and lead to discrimination against marginalized groups. Additionally, the rapid pace of technological advancement means that there is often a lack of regulation and oversight, which can lead to unintended consequences and negative outcomes.

To mitigate these risks and ensure that AI is developed and used in a responsible and ethical manner, it is crucial that policymakers and regulators take action. This can involve developing clear guidelines and standards for the development and deployment of AI systems, as well as establishing regulatory bodies to oversee the implementation of these systems.

Another important area of focus for regulation and legislation is the potential impact of AI on employment and the economy. As AI systems become more advanced, there is a risk that they will displace workers and lead to significant job losses. This requires policymakers to develop strategies to ensure that the benefits of AI are distributed equitably, and that workers are protected and supported through any economic transitions.

Overall, the development of AI as explored by Dr. Gebru is a complex and multifaceted issue that requires careful consideration and action from policymakers and regulators. With clear guidelines, standards, and regulatory oversight, we can ensure that AI is developed and used in a responsible and ethical manner, and that its benefits are distributed equitably. Failure to act risks perpetuating existing inequalities and exacerbating the challenges we face as a society. It is therefore crucial that we prioritize effective regulation and legislation that can guide the responsible and ethical use of AI and ensure that its benefits are shared by all.