
From Multilingual to Multimodal Processing
Published: 2019-12-26

Lecture Topic

From Multilingual to Multimodal Processing

Speaker and Biography

Chenhui Chu received his B.S. in Software Engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a research assistant professor at Osaka University. His research won the MSRA Collaborative Research 2019 grant award, the 2018 AAMT Nagao Award, and the CICLing 2014 Best Student Paper Award. He serves on the editorial boards of the Journal of Natural Language Processing and the Journal of Information Processing, and is a steering committee member of the Young Researcher Association for NLP Studies. His research interests center on natural language processing, particularly machine translation and language and vision understanding.

Abstract

In this talk, I will introduce three of our recent works, covering topics from multilingual to multimodal processing. The first work is about exploiting multilingualism for low-resource neural machine translation. The second work identifies visually grounded paraphrases from image and language multimodal data. The last work explores the use of knowledge for visual question answering in videos. Throughout the talk, I will discuss the research challenges and opportunities in multilingual and multimodal processing.

Academic Lecture