Wei Xu     

[phonetic pronunciation: way shoo]

Associate Professor
College of Computing
Georgia Institute of Technology
  wei.xu@cc.gatech.edu
  @cocoweixu

I am a faculty member of the School of Interactive Computing and the Machine Learning Center at Georgia Tech. My research lies at the intersection of machine learning, natural language processing, and social media. I direct the NLP X Lab, which currently focuses on (1) large language models, including cultural bias, multilingual capability, temporal shifts, and personalization; (2) text generation, including constrained decoding and learnable evaluation metrics; and (3) interdisciplinary NLP applications that can make an impact in education, accessibility, and beyond. I received the NSF CAREER Award, the Google Academic Research Award, the Criteo Faculty Research Award, the CrowdFlower AI for Everyone Award, and Best Paper Awards at COLING'18 and ACL'24, as well as research funding from DARPA and IARPA. I am a member of the NAACL executive board. I was a postdoctoral researcher at the University of Pennsylvania. I received my PhD in Computer Science from New York University, and my MS and BS from Tsinghua University.

  I'm recruiting 1-2 PhD students every year (apply to the Machine Learning or CS PhD program and list me as a potential advisor; if you have an EE background, consider also applying to the ML ECE program). I also recruit MS students (apply to the MSCS program and email me) and undergraduates who have sufficient time and motivation for a research thesis.
What's New
  Oct 2024, upcoming talk at Bloomberg's CTO Data Science Speaker Series
  Oct 2024, upcoming talk at Stony Brook University, New York
Research Highlights

Controllability, Stylistics, and Evaluation in Text Generation

We recently published one of the earliest works formally evaluating the impressive text rewriting capabilities of GPT-3.5 and GPT-4, in particular for paraphrase generation [EMNLP’22a] and text simplification [EMNLP’23a]. Our new LENS metric [ACL’23a] is the first learned automatic evaluation metric for text simplification; when used as the objective in minimum Bayes risk (MBR) decoding, it also set a new state of the art for open-source generation models, on par with GPT-3.5 and GPT-4. We also work on instruction finetuning for style [ACL’24a], edit-level evaluation of text generation [EMNLP’23a], document-grounded instructional dialog [ACL’23b], and document editing analysis for scientific writing [EMNLP’22b].
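
As a rough illustration of how a learned metric can drive MBR decoding, the sketch below reranks sampled candidates by their expected utility, using the other samples as pseudo-references. Note this is only a toy sketch: the `score` function here is a simple word-overlap stand-in, not the actual LENS metric, and the candidate pool is hand-written.

```python
# Toy sketch of minimum Bayes risk (MBR) decoding with a quality metric.
# In practice, score() would be a learned metric (e.g., LENS) and the
# candidates would be sampled from a generation model.

def score(candidate, pseudo_reference):
    """Toy utility: word-set overlap (Jaccard) between two strings.
    A stand-in for a learned evaluation metric."""
    a, b = set(candidate.split()), set(pseudo_reference.split())
    return len(a & b) / max(len(a | b), 1)

def mbr_decode(candidates):
    """Return the candidate with the highest expected utility,
    treating the other sampled candidates as pseudo-references."""
    def expected_utility(c):
        others = [r for r in candidates if r is not c]
        return sum(score(c, r) for r in others) / max(len(others), 1)
    return max(candidates, key=expected_utility)

samples = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "a feline rested upon the rug",
]
best = mbr_decode(samples)  # the candidate most similar to the pool overall
```

The idea is that a candidate agreeing with many other high-probability samples is a safer choice than the single highest-probability one, and a stronger metric in `score` directly translates into better selections.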

Fairness, Multilinguality, and Cross-cultural Capability of LLMs

We analyze monolingual and multilingual LLMs for cultural bias [ACL’24b], distillation [ACL’23c], cost efficiency [EMNLP’21], robustness [ACL’24d], and other strengths and weaknesses that may inform the development of better, fairer, smaller models. We also develop effective methods for cross-lingual transfer learning, such as label projection [ICLR’24]. That is, with only English annotated data, we directly train multilingual language models that can perform tasks (e.g., named entity recognition, question answering) in non-English languages.
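
To make the label projection idea concrete, here is a minimal marker-based sketch (an illustrative assumption on our part, not necessarily the exact [ICLR’24] method): annotated spans in the English sentence are wrapped in markers, the marked sentence is machine-translated, and the labels are read back off the markers in the output. The `translate` function below is a hard-coded stub standing in for a real MT model that preserves the markers.

```python
# Illustrative marker-based label projection for cross-lingual transfer:
# wrap labeled spans in markers, translate, then recover labeled spans
# from the translated text.

import re

def insert_markers(text, spans):
    """Wrap each (start, end, label) span in [label]...[/label] markers."""
    out, prev = [], 0
    for start, end, label in sorted(spans):
        out.append(text[prev:start])
        out.append(f"[{label}]{text[start:end]}[/{label}]")
        prev = end
    out.append(text[prev:])
    return "".join(out)

def extract_spans(marked_text):
    """Recover (surface form, label) pairs from a marker-annotated string."""
    return [(m.group(2), m.group(1))
            for m in re.finditer(r"\[(\w+)\](.*?)\[/\1\]", marked_text)]

def translate(text):
    # Stub for a marker-preserving MT system (toy English-to-Spanish table).
    table = {"Paris": "París", "is in": "está en", "France": "Francia"}
    for en, es in table.items():
        text = text.replace(en, es)
    return text

sentence = "Paris is in France"
spans = [(0, 5, "LOC"), (12, 18, "LOC")]
marked = insert_markers(sentence, spans)
projected = extract_spans(translate(marked))  # labeled spans in the target language
```

The projected spans can then be used as silver training data for a non-English tagger, so that only English gold annotations are ever needed.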

NLP + X (social media, accessibility, privacy) Interdisciplinary Research

We work on a range of interesting and useful applications that aim to improve human life and society. Much of our research has focused on text simplification [ACL’23a, ACL’23d, EMNLP’21], which rewrites texts to improve readability, making knowledge accessible to all. We have also recently started to develop document-grounded instructional dialog for personal assistance (e.g., cooking) [ACL’23b], as part of the larger NSF AI CARING effort. We also take a great interest in social media data, including work on human-in-the-loop detection of misinformation [ACL’23e] and stance classification toward multilingual, multi-cultural misinformation claims [EMNLP’22b]. One of our ongoing collaborative projects examines privacy protection for social media users.

NLP X Lab
    Yao Dou (CS PhD student; generation, LLM evaluation, privacy)
    Tarek Naous (ECE/ML PhD; multilingual LLM, fairness)
    Duong Minh Le (CS PhD; dialog, controllable text generation -- co-advisor: Alan Ritter)
    Jonathan Zheng (ML PhD; robustness of LLM, social media -- co-advisor: Alan Ritter)
    Chao Jiang (CS PhD; semantics, structured model)
    Geyang Guo (CS PhD; LLM alignment)
    Junmo Kang (CS PhD; model efficiency -- co-advisor: Alan Ritter)
    Anton Lavrouk (MS, autumn 2022 -- ; multilingual LLM analysis)
    Xiaofeng Wu (MS, autumn 2023 -- ; LLM subcharacter)
    Jeongrok Yu (MS, winter 2023 -- ; chatbot)
    Vishnesh Jayanthi (Undergrad, summer 2022 -- ; stylistics)
    Rachel Choi (Undergrad, summer 2022 -- )
    Ian Ligon (Undergrad, summer 2022 -- )
    Govind Ramesh (Undergrad, winter 2022 -- )
    Nour Allah El Senary (Undergrad, winter 2022 -- )
    Suraj Mehrotra (Undergrad, spring 2024 -- )
    Joseph Thomas (Undergrad, summer 2024 -- )
    Oleksandr Lavreniuk (Undergrad, summer 2024 -- )
    Siwan Yang (Undergrad, autumn 2024 -- )
    Julius Broomfield (Undergrad, autumn 2024 -- )
    Jad Matthew Bardawil (Undergrad, autumn 2024 -- )

Preprints
Publications
Teaching
Current Offering:
Previous Offerings:

Service

I am a member of the NAACL executive board. I have served as a senior area chair for EMNLP 2024 (resource and evaluation) and 2022 (generation), NAACL 2022 (machine learning for NLP) and 2021 (generation), and ACL 2020 (generation); as an area chair for COLM 2024, ACL 2023 (semantics), EMNLP 2021 (computational social science), EMNLP 2020 (generation), AAAI 2020 (NLP), ACL 2019 (semantics), NAACL 2019 (generation), EMNLP 2018 (social media), COLING 2018 (semantics), and EMNLP 2016 (generation); as a workshop chair for ACL 2017; and as the publicity chair for EMNLP 2019, NAACL 2018, and NAACL 2016. I also created a new undergraduate course on Social Media and Text Analytics.

Miscellaneous

When I have spare time, I enjoy visiting art museums, hiking, biking, and snowboarding.

I wrote a biography of my PhD advisor, Ralph Grishman, along with some early history of Information Extraction research, in 2017.

I also made a list of the best-dressed NLP researchers in 2016/17, 2015, and 2014.