Doctoral Forum in Information Systems 2026
The USJ Doctoral School and the Institute for Data Engineering and Science are organising a Doctoral Forum in the study/research area of Information Systems on 24 April 2026.
Programme Details:
Date: Friday, 24 April 2026
Time: 19:00 – 22:00
Location: Laboratory of Applied Neurosciences (LAN) at USJ (5/F, Academic Building); and Conference Room, 2/F, Residential Hall Building, USJ Ilha Verde Campus
Language: English
Organised by: Doctoral School and Institute for Data Engineering and Science
Meeting ID: 889 2386 5667
Passcode: 226296
*Free event, open to the general public
Programme:
- 19:00 | Laboratory presentation (5/F, Academic Building)
The Laboratory of Applied Neurosciences (LAN)
PhD Candidate: Ivy Zeng
Abstract: The Laboratory of Applied Neurosciences (LAN/USJ) at the Faculty of Business and Law provides a cutting-edge, multimodal infrastructure for interdisciplinary research. Bridging the gap between neuroscience and fields such as marketing, economics, and leadership, the LAN enables the study of human cognitive and emotional systems. This presentation and demonstration will highlight how the lab’s resources are available to support diverse research projects, fostering a collaborative, science-driven approach to complex professional and academic challenges.
- 19:40 | Break
- 19:45 | Doctoral talk (Online; Conference Room in Residential Hall)
Intelligent Organizational Ecosystems
Invited Speaker: Professor José Henrique Mamede
Abstract: Intelligent organizational ecosystems are emerging as a new paradigm for trustworthy digital transformation, in which data, processes, enterprise architecture, artificial intelligence, automation, and cybersecurity are conceived as interdependent dimensions of the same socio-technical system. Rather than viewing digital transformation as the incremental adoption of isolated technologies, this perspective understands organisations as dynamic ecosystems that must integrate human and artificial agents, interoperable data structures, adaptive processes, and resilient digital infrastructures in order to sustain competitiveness, relevance, and long-term organisational resilience. This research agenda proposes a systematic approach to the modelling, design, and validation of intelligent, secure, and data-driven organisational ecosystems, capable of supporting decision-making, governance, value creation, and organisational adaptability in complex and evolving environments. The current research is structured around four main axes: conceptual and architectural modelling of intelligent ecosystems; intelligent automation and adaptive enterprise architectures; cybersecurity as a systemic organisational property; and the articulation between digital transformation, human capabilities, and advanced training. The agenda contributes to the advancement of Information Systems research by offering an integrated scientific framework for trustworthy digital transformation, connecting data governance, artificial intelligence, process automation, security, and organisational capabilities within a coherent and operational vision. In doing so, it aims to produce not only theoretical advances, but also reusable artefacts, frameworks, and evaluation instruments with practical relevance for public and private organisations undergoing digital transformation.
Keywords: Information Systems, Intelligent Organizational Ecosystems, Enterprise Architecture, Digital Transformation, Cybersecurity

- 20:30 | Refreshments Break
- 21:00 | Presentation
Automated Depression Detection via Text, Audio and Multimodal Modalities: A Systematic Literature Review of Methodologies
PhD Candidate: Chi Man Chao
Abstract: Major Depressive Disorder (MDD) represents a leading global cause of disability and psychological distress, affecting more than 280 million individuals worldwide. Traditional clinical diagnosis is plagued by inherent subjectivity, constrained healthcare resources, and delayed intervention initiation, whereas artificial intelligence and affective computing-enabled automated depression detection (ADD) has emerged as a scalable, objective supplementary approach for early screening and longitudinal mental health monitoring. This systematic review focuses on text-based, audio-based, and multimodal depression detection to encompass high-impact research across all relevant disciplinary fields. This study systematically extracts and analyzes the methodological frameworks adopted in this influential literature to address core domain challenges, including data scarcity, class imbalance, environmental noise interference, cross-domain generalization deficits, and limited clinical interpretability. Findings reveal a unified three-stage evolutionary trajectory across all modalities: traditional machine learning reliant on handcrafted feature engineering, end-to-end deep learning for automatic representation learning, and advanced transformer-based self-supervised learning coupled with structured multimodal fusion architectures. This review clarifies state-of-the-art technical pathways, summarizes shared methodological paradigms and persistent bottlenecks, and provides a rigorous academic reference for future investigations in affective computing and artificial intelligence-driven mental health assessment.
- 21:30 | Presentation
Audio-Text Fusion for Depression Detection with Multi-Feature Augmentation
PhD Candidate: Kin Tong Yuen
Abstract: Depression affects over 280 million people worldwide, and speech analysis is a promising signal for automated screening. Achieving useful accuracy while preserving interpretability remains challenging, especially when limited labelled clinical data and opaque models are used. This paper presents a reproducible, signal-processing-grounded evaluation of multimodal depression detection on EATD-Corpus, with external checks on MODMA and DAIC-WOZ. The pipeline combines handcrafted acoustic features and wav2vec2 embeddings with modality-attention fusion. It standardises feature/model selection, subject-disjoint 5-fold cross-validation, and controlled augmentation (waveform perturbation, time/frequency masking, token dropout/shuffle). Across cross-validation, the strongest fusion is MFCC+mel-NetVLAD with BERT; MFCC with BERT or RoBERTa is also competitive, and wav2vec2/wavelet remain viable baselines. Augmentation effects are selective: the best fusion reaches a mean F1 of 0.8427 in a targeted augmentation sweep (primary preset: 0.8401), and text-only BERT reaches 0.8235, with gains varying by feature–encoder pairing and perturbation level. External checks indicate domain-shift calibration challenges, partially mitigated by threshold tuning. Main contributions include a transparent, reproducible pipeline, transfer-robustness diagnostics, and a simple calibration protocol that improves external performance without changing the model architecture.
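The subject-disjoint 5-fold cross-validation described in this abstract can be sketched with scikit-learn's GroupKFold, which guarantees that no speaker contributes clips to both the training and test folds. The data below is synthetic and the variable names are illustrative; this is a minimal sketch of the validation protocol, not the candidate's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Synthetic stand-in: 20 utterances from 10 speakers (2 clips each)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))            # e.g. pooled acoustic features per clip
y = rng.integers(0, 2, size=20)         # binary depression label per clip
subjects = np.repeat(np.arange(10), 2)  # speaker ID for each clip

# GroupKFold splits on speaker ID, so folds are subject-disjoint
gkf = GroupKFold(n_splits=5)
overlaps = []
for train_idx, test_idx in gkf.split(X, y, groups=subjects):
    train_subj = set(subjects[train_idx])
    test_subj = set(subjects[test_idx])
    overlaps.append(len(train_subj & test_subj))  # should always be 0

print(overlaps)  # no speaker ever appears on both sides of a split
```

Evaluating this way prevents the optimistic bias that arises when clips from the same speaker leak across the train/test boundary, which matters for the domain-shift and calibration issues the abstract raises.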
