Analysis of Halving Random Search Cross Validation in Machine Learning Model Optimization

Authors

Alief Cahyo Utomo, Sucipto, M. N. Muzaki

DOI:

https://doi.org/10.55506/icdess.v3i1.169

Keywords:

Halving Random Search Cross-Validation, Hyperparameter Optimization, Machine Learning

Abstract

This study analyzes the effectiveness of Halving Random Search Cross Validation as an alternative method for hyperparameter optimization in machine learning models, compared with Grid Search Cross Validation and Random Search Cross Validation. The dataset used is Internet Service Churn, evaluated with four algorithms: KNN, Decision Tree, SVM, and Gaussian Naive Bayes. Testing uses 10-fold cross validation repeated three times to ensure the validity of the results. The experiments show that Halving Random Search Cross Validation achieves accuracy, precision, and recall competitive with Grid Search (difference < 0.5%) on most models, while reducing computation time by 62–74% on KNN, Decision Tree, and SVM. On Gaussian Naive Bayes, however, whose hyperparameter space is small, the method is slower because of the successive-halving overhead. Random Search is fast but less stable on SVM and Gaussian Naive Bayes. The study concludes that Halving Random Search Cross Validation is the most balanced method for business cases such as churn prediction, recommends it for complex models, and suggests further development using Hyperband or Bayesian Optimization.
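For readers who want a concrete starting point, the sketch below outlines the kind of comparison the abstract describes, using scikit-learn's HalvingRandomSearchCV alongside GridSearchCV and RandomizedSearchCV on a KNN classifier with repeated 10-fold cross validation. The synthetic data, KNN parameter grid, and accuracy scoring are illustrative placeholders, not the authors' exact Internet Service Churn configuration.

# Illustrative sketch only: synthetic data stands in for the Internet Service Churn
# dataset, and the KNN parameter grid is a plausible placeholder, not the paper's.
from time import perf_counter

from sklearn.datasets import make_classification
from sklearn.experimental import enable_halving_search_cv  # noqa: F401 (enables HalvingRandomSearchCV)
from sklearn.model_selection import (
    GridSearchCV,
    HalvingRandomSearchCV,
    RandomizedSearchCV,
    RepeatedStratifiedKFold,
)
from sklearn.neighbors import KNeighborsClassifier

# Placeholder for the churn dataset (binary target).
X, y = make_classification(n_samples=5000, n_features=10, random_state=42)

# 10-fold cross validation repeated three times, as described in the abstract.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=42)

param_grid = {
    "n_neighbors": list(range(1, 31)),
    "weights": ["uniform", "distance"],
    "p": [1, 2],
}

searches = {
    "GridSearchCV": GridSearchCV(
        KNeighborsClassifier(), param_grid, cv=cv, scoring="accuracy", n_jobs=-1
    ),
    "RandomizedSearchCV": RandomizedSearchCV(
        KNeighborsClassifier(), param_grid, n_iter=30, cv=cv,
        scoring="accuracy", n_jobs=-1, random_state=42,
    ),
    "HalvingRandomSearchCV": HalvingRandomSearchCV(
        KNeighborsClassifier(), param_grid, factor=3, cv=cv,
        scoring="accuracy", n_jobs=-1, random_state=42,
    ),
}

# Fit each search and record wall-clock time for a side-by-side comparison.
for name, search in searches.items():
    start = perf_counter()
    search.fit(X, y)
    print(f"{name}: best acc={search.best_score_:.4f}, "
          f"params={search.best_params_}, time={perf_counter() - start:.1f}s")

Timing the three searches side by side in this way is how the relative time savings reported in the abstract could be checked on the actual churn data.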

Published

2026-01-18

How to Cite

Alief Cahyo Utomo, Sucipto, S., & Muzaki, M. N. (2026). Analysis of Halving Random Search Cross Validation in Machine Learning Model Optimization. Proceeding International Conference on Digital Education and Social Science, 3(1), 362–369. https://doi.org/10.55506/icdess.v3i1.169