Coins multi-class classification using vision TensorFlow

Alawnah, AlMo’men Bellah and Hayajnah, Ola (2025) Coins multi-class classification using vision TensorFlow. International Journal of Science and Research Archive, 14 (3). pp. 1017-1025. ISSN 2582-8185

Article PDF: IJSRA-2025-0777.pdf (Published Version, 510 kB)
Available under License: Creative Commons Attribution Non-Commercial Share Alike.

Abstract

Coin classification is challenging but crucial for applications such as vending machines, cash registers, and self-service kiosks. Coins are handled daily in banks, grocery stores, malls, supermarkets, and ATMs, so the ability to recognize them automatically with high accuracy is essential. Deep learning image-processing models have recently shown promise on the coin classification problem: they can learn to identify and classify coins from visual features such as shape, size, and texture. The task is not easy, however, because many coins look alike, which makes it difficult to distinguish between coin types and classify them accurately. This paper proposes a coin classification system built on the popular open-source library TensorFlow, which is well suited to image processing and computer vision. The system performs multi-class classification, meaning it can recognize and distinguish multiple types of coins. We evaluated it on a dataset of coin types from different countries, and the results were promising: the system achieves high accuracy in coin recognition. Specifically, we classify images from the Czech coins dataset, use a model pre-trained on ImageNet-21K to improve accuracy, and fine-tune a Vision Transformer (ViT).
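
To make the pipeline described in the abstract concrete, the following is a minimal sketch (not the authors' code) of how a ViT backbone pre-trained on ImageNet-21K might be fine-tuned for multi-class coin classification in TensorFlow. The TF-Hub handle, dataset path, and class count are placeholders and assumptions, not details taken from the paper.

# Hedged sketch: fine-tuning an ImageNet-21K pre-trained ViT feature extractor
# for multi-class coin classification with TensorFlow/Keras.
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 6               # assumption: number of coin denominations in the dataset
IMAGE_SIZE = (224, 224)       # standard ViT input resolution

# Load coin images from a directory tree with one sub-folder per coin class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "czech_coins/train",                      # placeholder path
    image_size=IMAGE_SIZE,
    batch_size=32,
    label_mode="categorical",
)

# ViT backbone pre-trained on ImageNet-21K, served through TensorFlow Hub.
# The handle below is a placeholder; any ViT feature-vector model would fit here.
vit_backbone = hub.KerasLayer(
    "https://tfhub.dev/<vit-imagenet21k-feature-vector>",  # placeholder handle
    trainable=True,                           # fine-tune the transformer weights
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMAGE_SIZE + (3,)),
    vit_backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # one output per coin class
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

model.fit(train_ds, epochs=10)

In this setup the pre-trained ViT supplies the visual features (shape, size, texture) and only the classification head plus fine-tuned backbone weights are adapted to the coin classes, which is the usual way ImageNet-21K pre-training is leveraged to boost accuracy on a smaller target dataset.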

Item Type: Article
Official URL: https://doi.org/10.30574/ijsra.2025.14.3.0777
Uncontrolled Keywords: Coin classification; ViT; Deep learning and vision transformer; Countries
Depositing User: Editor IJSRA
Date Deposited: 17 Jul 2025 16:29
Related URLs:
URI: https://eprint.scholarsrepository.com/id/eprint/1164