Abstract
With the exponential growth of data generated daily from social media, e-commerce,
and various digital interactions, the need to harness and leverage this vast expanse
of information effectively is more critical than ever. In this context, Deep Learning
(DL), a subfield of Artificial Intelligence (AI), has emerged as a transformative
force, delivering unparalleled capabilities in pattern recognition, data analysis, and
predictive modeling. Deep learning uses these large amounts of available data as fuel
for training and has a significant impact on fields ranging from healthcare to finance,
enabling advanced applications in natural language processing (NLP), computer vision
(CV), and recommender systems (RS).
This thesis delves into the essential role of AI in leveraging big data, focusing on
information extraction from social media, the explainability of deep learning models,
and the development of explainable recommender systems. With the vast, ever-growing
volume of data, extracting meaningful insights from unstructured social media content
becomes increasingly complex, necessitating cutting-edge AI solutions. Concurrently,
the reliance on deep learning models for critical decisions brings explainability
to the forefront, emphasizing the importance of developing transparent methods
that ensure user trust. Furthermore, the demand for recommender systems that
provide understandable textual explanations has surged, highlighting the need for
explainable systems that align with user preferences and decision-making processes.
This thesis advances the field through three key contributions. First, we establish
two traffic-related datasets from social media, annotated for comprehensive traffic
event detection. Employing BERT-based models, we tackle this detection problem via
text classification and slot filling, demonstrating these models' efficacy in parsing
social media for traffic-related information. Our second contribution introduces
methods based on Layer-wise Relevance Propagation (LRP) to explain deep conditional
random fields, with successful applications in fake news detection and image
segmentation. Lastly, we present a personalized explainable recommender system that
integrates user and item context into a language model, producing textual explanations
that enhance system transparency.
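
To make the slot-filling formulation of the first contribution concrete, the minimal
sketch below shows how a BERT-based token-classification model can tag traffic-related
slots in a short social media post using the Hugging Face transformers library. The
model name, the BIO label set, and the example sentence are illustrative assumptions
rather than the thesis's actual datasets or fine-tuned models, and the freshly
initialized classification head produces meaningless labels until it is fine-tuned on
annotated data.

```python
# Illustrative sketch (hypothetical model name, labels, and example text):
# tagging traffic-related slots with a BERT token-classification head.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "bert-base-cased"  # placeholder; not the thesis's fine-tuned model
id2label = {0: "O", 1: "B-EVENT", 2: "I-EVENT", 3: "B-LOCATION", 4: "I-LOCATION"}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(id2label)
)

text = "Traffic jam on the ring road after an accident near the tunnel"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, num_labels)

predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(f"{token}\t{id2label[label_id]}")  # labels are random until fine-tuned
```

In this formulation, a text-classification head would flag whether a post reports a
traffic event at all, while the slot-filling head extracts its attributes such as
event type and location.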
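For background on the second contribution: Layer-wise Relevance Propagation explains a
prediction by redistributing the model's output score backwards through the network
while conserving total relevance. The generic LRP-0 redistribution rule shown below is
standard background, not the CRF-specific formulation developed in the thesis:

$$ R_j = \sum_k \frac{a_j\, w_{jk}}{\sum_{j'} a_{j'}\, w_{j'k}}\, R_k $$

Here $a_j$ is the activation of neuron $j$, $w_{jk}$ the weight connecting it to neuron
$k$ in the layer above, and $R_k$ the relevance already attributed to $k$; the thesis
adapts this style of relevance propagation to deep conditional random fields.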
Original language | English
---|---
Awarding institution |
Supervisor(s)/Advisor |
Date of award | 15 Oct 2024
Status | Published - 2024