EXPLAINABLE MACHINE LEARNING FOR FINANCIAL ANALYSIS
Abstract
Technical analysis plays a vital role in the dynamic world of stock trading by helping traders identify patterns and trends in market prices. However, interpreting these patterns can be difficult, especially for novice traders, because of the complexity of the charts and indicators involved. This research introduces a user-friendly tool aimed at making technical analysis more accessible. It provides a streamlined interface for analysing and forecasting stock trends, catering to users of varying expertise. Most existing trading platforms either lack predictive features or operate as opaque 'black-box' systems, which can deter beginner users. To address this, we developed a tool that simplifies financial analysis and enhances clarity. It stands out by reducing reliance on complex visuals, improving user understanding of financial indicators, and explaining how those indicators influence market predictions. A distinctive feature of the tool is its use of Explainable Artificial Intelligence (XAI), which provides transparency and builds user trust. Feedback from user testing was positive, with participants highlighting the tool's clarity and interactivity as valuable for making informed investment decisions.
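The abstract describes a system that predicts stock trends from technical indicators and explains how each indicator influenced the prediction. As a minimal illustrative sketch (not the paper's actual pipeline), the following assumes three common indicators computed from a synthetic price series, a random-forest classifier for next-day direction, and the model's global feature importances as a simple transparency proxy; all names and indicator choices here are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic daily closing prices: a random walk around 100
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))

# Illustrative indicator features (an assumed feature set, not the paper's)
X = pd.DataFrame({
    "dist_sma5": close - close.rolling(5).mean(),    # distance from 5-day SMA
    "dist_sma20": close - close.rolling(20).mean(),  # distance from 20-day SMA
    "momentum_1d": close.diff(),                     # one-day price change
})
# Target: does the price rise on the next day?
y = (close.shift(-1) > close).astype(int)

# Drop the rolling-window warm-up rows and the final row (no "next day")
valid = X.dropna().index[:-1]
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X.loc[valid], y.loc[valid])

# A lightweight "explanation": how much each indicator drove the model
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

In an interactive tool like the one described, such per-indicator attributions (or per-prediction variants such as SHAP values) could be surfaced in plain language to help novice users see why the model forecasts a given trend.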
Zenodo DOI: https://doi.org/10.5281/zenodo.16566789