Wei Xu

[phonetic pronunciation: way shoo]

Associate Professor
College of Computing
Georgia Institute of Technology

I am a faculty member of the School of Interactive Computing, the Machine Learning Center, and the NSF AI CARING Institute at Georgia Tech. My research lies at the intersection of machine learning, natural language processing, and social media. I direct the NLP X Lab, which currently focuses on (1) analysis of large language models, including cultural bias, multilingual capability, temporal shifts, and personalization; (2) text generation, including constrained decoding and learnable evaluation metrics; and (3) NLP applications that can make an impact in education, accessibility, and beyond. I recently received the NSF CRII Award, NSF CAREER Award, Criteo Faculty Research Award, CrowdFlower AI for Everyone Award, and the Best Paper Award at COLING'18, as well as research funding from DARPA and IARPA. I am a member of the NAACL executive board. I was a postdoctoral researcher at the University of Pennsylvania. I received my PhD in Computer Science from New York University, and my MS and BS from Tsinghua University.

I'm recruiting 1-3 new PhD students every year (apply to the PhD program and list me as a potential advisor). I also advise undergraduate and MS students (who have sufficient time and motivation) on research theses.
What's New
Research Highlights

Controllability, Stylistics, and Evaluation in Text Generation

We recently published one of the earliest works formally evaluating the impressive text rewriting capability of GPT-3.5 (davinci), in particular for paraphrase generation [EMNLP’22a] and text simplification [ACL’23a]. Our new LENS metric [ACL’23a] is the first learned automatic evaluation metric for text simplification; when used as the objective in minimum Bayes risk (MBR) decoding, it also set a new state of the art among open-source generation models, on par with GPT-3.5 and GPT-4. We also work on instruction finetuning for style classification [arXiv’23a], edit-level text generation evaluation [arXiv’23b], document-grounded instructional dialog [ACL’23b], and document editing analysis for scientific writing [EMNLP’22b].

Fairness, Multilingual, and Cross-cultural Capabilities of LLMs

We analyze monolingual and multilingual LLMs for fairness [arXiv’23c], distillation [ACL’23c], cost effectiveness [EMNLP’21], and readability assessment [arXiv’23d], as well as other strengths and weaknesses that may lead to the development of better, fairer, smaller models. We also develop effective methods, such as label projection [ACL-F’23], for cross-lingual transfer learning. That is, with only English annotated data, we directly train multilingual language models that can perform tasks (e.g., entity recognition, question answering) in non-English languages.

NLP + X (social media, accessibility, privacy) Interdisciplinary Research

We work on a range of interesting and useful applications that aim to improve human life and society. Much of our research has focused on text simplification [ACL’23a, ACL’23d, EMNLP’21], which simplifies texts and improves readability, making knowledge accessible to all. We also recently started developing document-grounded instructional dialog for personal assistance (e.g., cooking) [ACL’23b], as part of the larger NSF AI CARING effort. We also take a great interest in social media data, including work on human-in-the-loop detection of misinformation [ACL’23e] and stance classification toward multilingual, multi-cultural misinformation claims [EMNLP’22b]. One of our ongoing collaborative projects examines privacy protection for users on social media.


    Chao Jiang (PhD student; semantics, structured model)
    Yao Dou (PhD student; generation, LLM evaluation, privacy)
    Tarek Naous (PhD student; fairness, multilingual LLM)
    Duong Minh Le (PhD student; dialog, controllable text generation -- co-advisor: Alan Ritter)
    Yang Chen (PhD student; information extraction, transfer learning -- co-advisor: Alan Ritter)
    Junmo Kang (PhD student; model efficiency -- co-advisor: Alan Ritter)
    Jonathan Zheng (PhD student; robustness of LLM, social media)
    David Heineman (Undergrad, winter 2020 -- ; generation, LLM evaluation)
    Michael Ryan (Undergrad, winter 2020 -- ; multilingual LLM, text simplification)
    Vishnu Suresh (Undergrad, autumn 2021 -- )
    Marcus Ma (MS student, spring 2022 -- ; authorship)
    Vishnesh Jayanthi (Undergrad, summer 2022 -- ; stylistics)
    Rachel Choi (Undergrad, summer 2022 -- )
    Ian Ligon (Undergrad, summer 2022 -- )
    Anton Lavrouk (Undergrad, autumn 2022 -- )
    Vinayak Athavale (Undergrad, autumn 2022 -- )
    Govind Ramesh (Undergrad, winter 2022 -- )
    Nour Allah El Senary (Undergrad, winter 2022 -- )
    Andrew Li (MS student, summer 2023 -- ; dialog, co-advisor: Alan Ritter)
    Mithun Subhash (Undergrad, summer 2023 -- )
    Piranava Abeyakaran (Undergrad, summer 2023 -- )

Current Offering:
Previous Offerings:


I am a NAACL executive board member; a senior area chair for EMNLP 2022 (generation), NAACL 2022 (machine learning for NLP) and 2021 (generation), and ACL 2020 (generation); an area chair for ACL 2023 (semantics), EMNLP 2021 (computational social science), EMNLP 2020 (generation), AAAI 2020 (NLP), ACL 2019 (semantics), NAACL 2019 (generation), EMNLP 2018 (social media), COLING 2018 (semantics), and EMNLP 2016 (generation); a workshop chair for ACL 2017; and the publicity chair for EMNLP 2019, NAACL 2018, and NAACL 2016. I also created a new undergraduate course on Social Media and Text Analytics.


When I have spare time, I enjoy visiting art museums, hiking, biking, and snowboarding.

I wrote a biography of my PhD advisor, Ralph Grishman, along with some early history of Information Extraction research, in 2017.

I also made a list of the best-dressed NLP researchers in 2016/17, 2015, and 2014.