This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17–19, 2024.
The 95 full papers presented were carefully reviewed and selected from 204 submissions. They are organized in the following topical sections:
Part I: intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI.
Part II: XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI.
Part III: counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI.
Part IV: explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.
Table of contents (Part III):

Counterfactual explanations and causality for eXplainable AI:
- Sub-SpaCE: Subsequence-based Sparse Counterfactual Explanations for Time Series Classification Problems
- Human-in-the-loop Personalized Counterfactual Recourse
- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images
- Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence
- CountARFactuals – Generating plausible model-agnostic counterfactual explanations with adversarial random forests
- Causality-Aware Local Interpretable Model-Agnostic Explanations
- Evaluating the Faithfulness of Causality in Saliency-based Explanations of Deep Learning Models for Temporal Colour Constancy
- CAGE: Causality-Aware Shapley Value for Global Explanations

Fairness, trust, privacy, security, accountability and actionability in eXplainable AI:
- Exploring the Reliability of SHAP Values in Reinforcement Learning
- Categorical Foundation of Explainable AI: A Unifying Theory
- Investigating Calibrated Classification Scores through the Lens of Interpretability
- XentricAI: A Gesture Sensing Calibration Approach through Explainable and User-Centric AI
- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
- ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework
- Differential Privacy for Anomaly Detection: Analyzing the Trade-off Between Privacy and Explainability
- Blockchain for Ethical & Transparent Generative AI Utilization by Banking & Finance Lawyers
- Multi-modal Machine learning model for Interpretable Mobile Malware Classification
- Explainable Fraud Detection with Deep Symbolic Classification
- Better Luck Next Time: About Robust Recourse in Binary Allocation Problems
- Towards Non-Adversarial Algorithmic Recourse
- Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
- XAI for Time Series Classification: Evaluating the Benefits of Model Inspection for End-Users