Abstract
Artificial intelligence and computer science are converging to produce the next generation of emerging technologies. Artificial intelligence enables machines to learn, make intelligent decisions, and adapt, but it rests on the foundations of computer science: algorithms, data structures, optimization, complexity theory, and computing platforms. This theoretical review examines the role of these foundations in modern artificial intelligence systems and their contribution to scalable, explainable, efficient, and deployable technologies. The paper analyzes the technical underpinnings of machine learning, deep learning, graph-based intelligence, neuro-symbolic systems, explainable artificial intelligence, and edge intelligence in terms of algorithmic reasoning, data structures, learning optimization, computational efficiency, and computing infrastructure. It also examines how artificial intelligence and computer science integrate across major application areas, including smart health, cybersecurity, robotics, natural language processing, Internet of Things systems, edge computing, and sustainable digital infrastructure. To strengthen its conceptual contribution, the paper proposes an Algorithm-to-Intelligence Integration Framework that connects computer science foundations, artificial intelligence paradigms, system requirements, application domains, and future technologies. The survey concludes that future intelligent systems should combine adaptive learning with robust computational design to achieve responsible, secure, sustainable, and deployable technological advancement.
References
- D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016, doi: 10.1038/nature16961.
- I. H. Sarker, “Machine Learning: Algorithms, Real-World Applications and Research Directions,” SN COMPUT. SCI., vol. 2, no. 3, p. 160, Mar. 2021, doi: 10.1007/s42979-021-00592-x.
- Y. Xu et al., “Artificial intelligence: A powerful paradigm for scientific research,” Innovation, vol. 2, no. 4, Nov. 2021, doi: 10.1016/j.xinn.2021.100179.
- M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, “Geometric Deep Learning: Going beyond Euclidean data,” IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18–42, Jul. 2017, doi: 10.1109/MSP.2017.2693418.
- T. N. Kipf and M. Welling, “Semi-Supervised Classification with Graph Convolutional Networks,” Feb. 22, 2017, arXiv: arXiv:1609.02907. doi: 10.48550/arXiv.1609.02907.
- Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A Comprehensive Survey on Graph Neural Networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 1, pp. 4–24, Jan. 2021, doi: 10.1109/TNNLS.2020.2978386.
- S. Ruder, “An overview of gradient descent optimization algorithms,” Jun. 15, 2017, arXiv: arXiv:1609.04747. doi: 10.48550/arXiv.1609.04747.
- S. Han, H. Mao, and W. J. Dally, “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding,” Feb. 15, 2016, arXiv: arXiv:1510.00149. doi: 10.48550/arXiv.1510.00149.
- T. Ben-Nun and T. Hoefler, “Demystifying Parallel and Distributed Deep Learning: An In-depth Concurrency Analysis,” ACM Comput. Surv., vol. 52, no. 4, pp. 65:1–65:43, Aug. 2019, doi: 10.1145/3320060.
- L. Li, Y. Fan, M. Tse, and K.-Y. Lin, “A review of applications in federated learning,” Computers & Industrial Engineering, vol. 149, p. 106854, Nov. 2020, doi: 10.1016/j.cie.2020.106854.
- J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), J. Burstein, C. Doran, and T. Solorio, Eds., Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp. 4171–4186. doi: 10.18653/v1/N19-1423.
- T. Brown et al., “Language models are few-shot learners,” Advances in neural information processing systems, vol. 33, pp. 1877–1901, 2020.
- R. Bommasani et al., “On the Opportunities and Risks of Foundation Models,” 2021, arXiv: arXiv:2108.07258. doi: 10.48550/arXiv.2108.07258.
- G. Marcus, “Deep Learning: A Critical Appraisal,” 2018, arXiv: arXiv:1801.00631. doi: 10.48550/arXiv.1801.00631.
- L. D. Raedt, S. Dumančić, R. Manhaeve, and G. Marra, “From Statistical Relational to Neuro-Symbolic Artificial Intelligence,” Mar. 24, 2020, arXiv: arXiv:2003.08316. doi: 10.48550/arXiv.2003.08316.
- A. d’Avila Garcez and L. C. Lamb, “Neurosymbolic AI: the 3rd wave,” Artif Intell Rev, vol. 56, no. 11, pp. 12387–12406, Nov. 2023, doi: 10.1007/s10462-023-10448-w.
- F. Doshi-Velez and B. Kim, “Towards A Rigorous Science of Interpretable Machine Learning,” Mar. 02, 2017, arXiv: arXiv:1702.08608. doi: 10.48550/arXiv.1702.08608.
- W. Samek, T. Wiegand, and K.-R. Müller, “Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models,” Aug. 28, 2017, arXiv: arXiv:1708.08296. doi: 10.48550/arXiv.1708.08296.
- R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A Survey of Methods for Explaining Black Box Models,” ACM Comput. Surv., vol. 51, no. 5, pp. 93:1–93:42, Aug. 2018, doi: 10.1145/3236009.
- C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nat Mach Intell, vol. 1, no. 5, pp. 206–215, May 2019, doi: 10.1038/s42256-019-0048-x.
- N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A Survey on Bias and Fairness in Machine Learning,” ACM Comput. Surv., vol. 54, no. 6, pp. 115:1–115:35, Jul. 2021, doi: 10.1145/3457607.
- I. D. Raji et al., “Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing,” in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, in FAT* ’20. New York, NY, USA: Association for Computing Machinery, Jan. 2020, pp. 33–44. doi: 10.1145/3351095.3372873.
- K.-H. Yu, A. L. Beam, and I. S. Kohane, “Artificial intelligence in healthcare,” Nat Biomed Eng, vol. 2, no. 10, pp. 719–731, Oct. 2018, doi: 10.1038/s41551-018-0305-z.
- E. J. Topol, “High-performance medicine: the convergence of human and artificial intelligence,” Nat Med, vol. 25, no. 1, pp. 44–56, Jan. 2019, doi: 10.1038/s41591-018-0300-7.
- A. Esteva et al., “A guide to deep learning in healthcare,” Nat Med, vol. 25, no. 1, pp. 24–29, Jan. 2019, doi: 10.1038/s41591-018-0316-z.
- X. Chen, H. Xie, X. Tao, F. L. Wang, M. Leng, and B. Lei, “Artificial intelligence and multimodal data fusion for smart healthcare: topic modeling and bibliometrics,” Artif Intell Rev, vol. 57, no. 4, p. 91, Mar. 2024, doi: 10.1007/s10462-024-10712-7.
- A. L. Buczak and E. Guven, “A Survey of Data Mining and Machine Learning Methods for Cyber Security Intrusion Detection,” IEEE Communications Surveys & Tutorials, vol. 18, no. 2, pp. 1153–1176, 2016, doi: 10.1109/COMST.2015.2494502.
- D. S. Berman, A. L. Buczak, J. S. Chavis, and C. L. Corbett, “A Survey of Deep Learning Methods for Cyber Security,” Information, vol. 10, no. 4, p. 122, Apr. 2019, doi: 10.3390/info10040122.
- I. H. Sarker, M. H. Furhad, and R. Nowrozy, “AI-Driven Cybersecurity: An Overview, Security Intelligence Modeling and Research Directions,” SN COMPUT. SCI., vol. 2, no. 3, p. 173, Mar. 2021, doi: 10.1007/s42979-021-00557-0.
- E. Hashmi, M. M. Yamin, and S. Y. Yayilgan, “Securing tomorrow: a comprehensive survey on the synergy of Artificial Intelligence and information security,” AI Ethics, vol. 5, no. 3, pp. 1911–1929, Jun. 2025, doi: 10.1007/s43681-024-00529-z.
- L. Tai, G. Paolo, and M. Liu, “Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep. 2017, pp. 31–36. doi: 10.1109/IROS.2017.8202134.
- L. Brunke et al., “Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 5, pp. 411–444, May 2022, doi: 10.1146/annurev-control-042920-020211.
- D. J. Yeong, K. Panduru, and J. Walsh, “Exploring the Unseen: A Survey of Multi-Sensor Fusion and the Role of Explainable AI (XAI) in Autonomous Vehicles,” Sensors, vol. 25, no. 3, p. 856, Jan. 2025, doi: 10.3390/s25030856.
- D. W. Otter, J. R. Medina, and J. K. Kalita, “A Survey of the Usages of Deep Learning for Natural Language Processing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 604–624, Feb. 2021, doi: 10.1109/TNNLS.2020.2979670.
- T. Wolf et al., “Transformers: State-of-the-Art Natural Language Processing,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Q. Liu and D. Schlangen, Eds., Online: Association for Computational Linguistics, Oct. 2020, pp. 38–45. doi: 10.18653/v1/2020.emnlp-demos.6.
- W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge Computing: Vision and Challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, Oct. 2016, doi: 10.1109/JIOT.2016.2579198.
- Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, and J. Zhang, “Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing,” Proceedings of the IEEE, vol. 107, no. 8, pp. 1738–1762, Aug. 2019, doi: 10.1109/JPROC.2019.2918951.
- A. Bourechak, O. Zedadra, M. N. Kouahla, A. Guerrieri, H. Seridi, and G. Fortino, “At the Confluence of Artificial Intelligence and Edge Computing in IoT-Based Applications: A Review and New Perspectives,” Sensors, vol. 23, no. 3, p. 1639, Jan. 2023, doi: 10.3390/s23031639.
- R. Schwartz, J. Dodge, N. A. Smith, and O. Etzioni, “Green AI,” Commun. ACM, vol. 63, no. 12, pp. 54–63, Nov. 2020, doi: 10.1145/3381831.
- R. Vinuesa et al., “The role of artificial intelligence in achieving the Sustainable Development Goals,” Nat Commun, vol. 11, no. 1, p. 233, Jan. 2020, doi: 10.1038/s41467-019-14108-y.
- D. Rolnick et al., “Tackling Climate Change with Machine Learning,” ACM Comput. Surv., vol. 55, no. 2, pp. 42:1–42:96, Feb. 2022, doi: 10.1145/3485128.
- L. H. Kaack, P. L. Donti, E. Strubell, G. Kamiya, F. Creutzig, and D. Rolnick, “Aligning artificial intelligence with climate change mitigation,” Nat. Clim. Chang., vol. 12, no. 6, pp. 518–527, Jun. 2022, doi: 10.1038/s41558-022-01377-7.
- A. Jobin, M. Ienca, and E. Vayena, “The global landscape of AI ethics guidelines,” Nat Mach Intell, vol. 1, no. 9, pp. 389–399, Sep. 2019, doi: 10.1038/s42256-019-0088-2.
- L. Floridi and J. Cowls, “A Unified Framework of Five Principles for AI in Society,” in Machine Learning and the City, John Wiley & Sons, Ltd, 2022, pp. 535–545. doi: 10.1002/9781119815075.ch45.