Padinhakara, Mohammed Javed (2025) Algorithmic equity: Developing safeguards against societal bias in autonomous vehicle systems. World Journal of Advanced Research and Reviews, 26 (3). pp. 983-990. ISSN 2581-9615
WJARR-2025-2171.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial Share Alike.
Abstract
The integration of autonomous vehicle (AV) technology with societal structures necessitates robust safeguards against algorithmic bias in transportation systems. Through systematic evaluation of bias sources, including training data limitations, model architecture constraints, and implementation oversights, potential pathways emerge through which AVs may perpetuate or amplify existing social inequities. A comprehensive three-pillar framework addresses these challenges: robust data governance protocols, lifecycle-integrated ethical principles, and dynamic monitoring mechanisms. Practical insights from documented incidents and successful interventions inform implementation strategies across diverse contexts. The proposed structure emphasizes stakeholder participation across communities and disciplines, recognizing that technological solutions alone cannot address the complex sociotechnical dimensions of fair mobility systems. This article contributes to the emerging discourse on responsible AI deployment in public infrastructure, offering actionable strategies for aligning autonomous systems with principles of equity and inclusion in increasingly automated urban environments.
| Item Type: | Article |
|---|---|
| Official URL: | https://doi.org/10.30574/wjarr.2025.26.3.2171 |
| Uncontrolled Keywords: | Autonomous Vehicles; Algorithmic Bias; Ethical AI; Mobility Justice; Sociotechnical Systems |
| Depositing User: | Editor WJARR |
| Date Deposited: | 20 Aug 2025 12:19 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/4035 |