Publications
Please see the full list of her publications on Google Scholar.
Selected Publications
Book Chapters
N. Cummins, Z. Ren, A. Mallol-Ragolta, and B. Schuller, Artificial Intelligence in Precision Health, ch. 5 – Machine learning in digital health, recent trends, and ongoing challenges, pp. 121–148. Elsevier, 2020
Journal Articles
Z. Ren, Y. Chang, W. Nejdl, and B. Schuller, “Learning complementary representations via attention-based ensemble learning for cough-based COVID-19 recognition,” Acta Acustica, 2022. 4 pages, to appear
Z. Ren, Y. Chang, K. D. Bartl-Pokorny, F. B. Pokorny, and B. Schuller, “The acoustic dissection of cough: Diving into machine listening-based COVID-19 analysis and detection,” Journal of Voice, 2022. to appear
Y. Chang, X. Jing, Z. Ren, and B. Schuller, “CovNet: A transfer learning framework for automatic COVID-19 detection from crowd-sourced cough sounds,” Frontiers in Digital Health, vol. 3, pp. 1–11, Jan. 2022 [code]
Z. Ren, Q. Kong, J. Han, M. Plumbley, and B. Schuller, “CAA-Net: Conditional atrous CNNs with attention for explainable device-robust acoustic scene classification,” IEEE Transactions on Multimedia, Nov. 2020. 12 pages [code]
F. Dong, K. Qian, Z. Ren, A. Baird, X. Li, Z. Dai, B. Dong, F. Metze, Y. Yamamoto, and B. Schuller, “Machine listening for heart status monitoring: Introducing and benchmarking HSS – the heart sounds Shenzhen corpus,” IEEE Journal of Biomedical and Health Informatics, vol. 24, pp. 2082–2092, Nov. 2019
Z. Ren, K. Qian, Z. Zhang, V. Pandit, A. Baird, and B. Schuller, “Deep scalogram representations for acoustic scene classification,” IEEE/CAA Journal of Automatica Sinica, vol. 5, pp. 662–669, May 2018
Conference Papers
Y. Chang, Z. Ren, T. T. Nguyen, W. Nejdl, and B. Schuller, “Example-based explanations with adversarial attacks for respiratory sound analysis,” in Proc. INTERSPEECH, (Incheon, Korea), 2022. to appear [code]
Z. Ren, T. T. Nguyen, and W. Nejdl, “Prototype learning for interpretable respiratory sound analysis,” in Proc. ICASSP, (Singapore), pp. 9087–9091, 2022 [code]
Z. Ren, J. Han, N. Cummins, and B. Schuller, “Enhancing transferability of black-box adversarial attacks via lifelong learning for speech emotion recognition models,” in Proc. INTERSPEECH, (Shanghai, China), pp. 496–500, 2020
Z. Ren, A. Baird, J. Han, Z. Zhang, and B. Schuller, “Generating and protecting against adversarial attacks for deep speech-based emotion recognition models,” in Proc. ICASSP, (Barcelona, Spain), pp. 7184–7188, 2020 [code]
Z. Ren, J. Han, N. Cummins, Q. Kong, M. Plumbley, and B. Schuller, “Multi-instance learning for bipolar disorder diagnosis using weakly labelled speech data,” in Proc. DPH, (Marseille, France), pp. 79–83, 2019
F. Ringeval, B. Schuller, M. Valstar, N. Cummins, R. Cowie, L. Tavabi, M. Schmitt, S. Alisamir, S. Amiriparian, E.-M. Messner, S. Song, S. Liu, Z. Zhao, A. Mallol-Ragolta, Z. Ren, M. Soleymani, and M. Pantic, “AVEC 2019 workshop and challenge: State-of-mind, detecting depression with AI, and cross-cultural affect recognition,” in Proc. AVEC, (Nice, France), pp. 3–12, 2019
Z. Ren, Q. Kong, J. Han, M. Plumbley, and B. Schuller, “Attention-based atrous convolutional neural networks: Visualisation and understanding perspectives of acoustic scenes,” in Proc. ICASSP, (Brighton, UK), pp. 56–60, 2019 [code]
Z. Ren, Q. Kong, K. Qian, M. Plumbley, and B. Schuller, “Attention-based convolutional neural networks for acoustic scene classification,” in Proc. DCASE, (Surrey, UK), pp. 39–43, 2018
Z. Ren, N. Cummins, J. Han, S. Schnieder, J. Krajewski, and B. Schuller, “Evaluation of the pain level from speech: Introducing a novel pain database and benchmarks,” in Proc. ITG, (Oldenburg, Germany), pp. 56–60, 2018
B. Schuller, S. Steidl, A. Batliner, P. Marschik, H. Baumeister, F. Dong, S. Hantke, F. Pokorny, E.-M. Rathner, K. Bartl-Pokorny, C. Einspieler, D. Zhang, A. Baird, S. Amiriparian, K. Qian, Z. Ren, M. Schmitt, P. Tzirakis, and S. Zafeiriou, “The INTERSPEECH 2018 computational paralinguistics challenge: Atypical & self-assessed affect, crying & heart beats,” in Proc. INTERSPEECH, (Hyderabad, India), pp. 122–126, 2018
Z. Ren, N. Cummins, V. Pandit, J. Han, K. Qian, and B. Schuller, “Learning image-based representations for heart sound classification,” in Proc. DH, (Lyon, France), pp. 143–147, 2018
Z. Ren, V. Pandit, K. Qian, Z. Yang, Z. Zhang, and B. Schuller, “Deep sequential image features on acoustic scene classification,” in Proc. DCASE, (Munich, Germany), pp. 113–117, 2017
Z. Ren, Q. Zhang, H. Zhu, and Q. Wang, “Extending the FOV from disparity and color consistencies in multiview light fields,” in Proc. ICIP, (Beijing, China), pp. 1157–1161, 2017
Patents
Q. Wang, Z. Ren, G. Zhou, and W. Zeng, “Light field acquisition device based on micro camera array and data processing method (基于微相机阵列的光场采集装置及数据处理方法),” No. CN106027861A, Oct. 2016
Q. Wang, Z. Ji, S. Han, C. Zhang, Z. Ren, and G. Zhou, “Apparatus and method for light field camera extrinsic parameter calibration (光场相机外参数标定装置及方法),” No. CN105654484A, June 2016