Confronting Bias In Artificial Intelligence : Building Transparent, Diverse, and Ethical AI Systems
Klaassen, Granville (2024)
All rights reserved. This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-202501071060
Abstract
As a father and a person of colour, I found that the Netflix documentary Coded Bias (Kantayya, S., 2020) resonated deeply, prompting a journey into the historical injustices and systemic biases perpetuated by artificial intelligence. The rapid advancement of AI, particularly Artificial General Intelligence (AGI), presents significant challenges (Hendrycks et al., 2023). The global competition among tech giants raises concerns about the risk of AGI becoming uncontrollable or deviating from human values. This is particularly worrying when I consider the future world my daughters might inherit.
The film underscores the critical need for responsible AI development and stringent regulation to ensure it serves as a force for good. It addresses ethical challenges, potential job losses, and societal impacts, emphasizing the importance of balancing AI safety with family values and community well-being. Bias in AI perpetuates systemic inequalities, especially in law enforcement and healthcare. This thesis identifies the origins of these biases and highlights the necessity for ethical oversight, diverse development teams, and transparent algorithms. Without constant vigilance, unchecked biases can become more pronounced, leading to severe consequences. Practical recommendations include implementing unbiased metrics and regulatory frameworks to ensure AI systems serve all communities justly.