Original title: On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning Authors: Chengzu Li, Han Zhou, Goran Glavaš, Anna Korhonen, Ivan Vulić In this article, the authors explore the issue of overconfidence…
How do social robots behave in group conversation?
Original title: A Study on Social Robot Behavior in Group Conversation Authors: Tung Nguyen, Eric Nichols, Randy Gomez This article explores the impact of robots on group dynamics and conversations. While there has been an…
How can V2X Environmental Perception prevent accidents?
Original title: AccidentGPT: Accident analysis and prevention from V2X Environmental Perception with Multi-modal Large Model Authors: Lening Wang, Han Jiang, Pinlong Cai, Daocheng Fu, Tianqi Wang, Zhiyong Cui, Yilong Ren, Haiyang Yu, Xuesong Wang, Yinhai…
Can Multi-Level Contrastive Learning Improve Natural Language Explanation in VQA?
Original title: Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA Authors: Chengen Lai, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, Guangneng Hu In an article about natural language explanation in…
Can TagAlign enhance Vision-Language Alignment?
Original title: TagAlign: Improving Vision-Language Alignment with Multi-Tag Classification Authors: Qinying Liu, Kecheng Zheng, Wu Wei, Zhan Tong, Yu Liu, Wei Chen, Zilei Wang, Yujun Shen In this article, the authors discuss the challenge of…
Can Mini-GPTs be Efficient Large Language Models using Contextual Pruning?
Original title: Mini-GPTs: Efficient Large Language Models through Contextual Pruning Authors: Tim Valicenti, Justice Vidal, Ritik Patnaik This article explores the optimization of Large Language Models (LLMs) in the field of artificial intelligence (AI). It…
What is a sequential survival process in continuous-time graph representation?
Original title: Continuous-time Graph Representation with Sequential Survival Process Authors: Abdulkadir Celikkanat, Nikolaos Nakis, Morten Mørup The article discusses the growth of representation learning methods for graphs over the past two decades. These methods have…
Can pre-trained image backbones be unlocked for semantic image synthesis?
Original title: Unlocking Pre-trained Image Backbones for Semantic Image Synthesis Authors: Tariq Berrada, Jakob Verbeek, Camille Couprie, Karteek Alahari In the article, the authors discuss the task of semantic image synthesis, which involves generating images…
How can realistic rainy weather be simulated for LiDARs in the CARLA Simulator?
Original title: Realistic Rainy Weather Simulation for LiDARs in CARLA Simulator Authors: Donglin Yang, Zhenfeng Liu, Wentao Jiang, Guohang Yan, Xing Gao, Botian Shi, Si Liu, Xinyu Cai In this article, the authors discuss the…
Can PIA animate personalized images using plug-and-play modules in text-to-image models?
Original title: PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models Authors: Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen In this article, the researchers present exciting advances in personalized…