Wei Xu     

[phonetic pronunciation: way shoo]

Assistant Professor
School of Interactive Computing
Georgia Institute of Technology
  wei.xu@cc.gatech.edu
  @cocoweixu      @cocoxu

My research lies at the intersection of machine learning, natural language processing, and social media. I focus on designing algorithms that learn semantics from large-scale data for natural language understanding, and on natural language generation, in particular with stylistic variations. I recently received the NSF CRII Award, the Criteo Faculty Research Award, the CrowdFlower AI for Everyone Award, and a Best Paper Award at COLING'18, as well as research funding from DARPA. Before joining Georgia Tech, I was a tenure-track assistant professor in the Department of Computer Science and Engineering at the Ohio State University, starting in 2016. Before that, I was a postdoctoral researcher at the University of Pennsylvania. I received my PhD in Computer Science from New York University, where I was a MacCracken Fellow, and my MS and BS from Tsinghua University.

I am a senior area chair for NAACL 2021 and ACL 2020 (generation), and an area chair for EMNLP 2020 (generation), AAAI 2020 (NLP), ACL 2019 (semantics), NAACL 2019 (generation), EMNLP 2018 (social media), COLING 2018 (semantics), EMNLP 2016 (generation), a workshop chair for ACL 2017, and the publicity chair for EMNLP 2019, NAACL 2018 and 2016. I also created a new course on Social Media and Text Analytics.

  I am looking for one or a few new PhD students every year (more info). I am also hiring a postdoc.
What's New
  Sep 11 - talk at Emory University, CS Department Seminar, "Understanding & Generating Human Language"
  Oct 15 - talk at USC/ISI NL Seminar, "Natural Language Understanding for Noisy Text"
  Oct 27 - talk at University of Pittsburgh, NLP Seminar, "Automatic Text Simplification for K-12 Students"
  Oct 29 - talk at University of Sheffield, NLP Seminar, "Natural Language Understanding for Noisy Text"
  Oct 30 - talk at Google, "Natural Language Generation for Social Good"
  Nov 4 - talk at University of Delaware, ECE Department Seminar, "Understanding & Generating Human Language"
  Nov 13 - talk at CMU, LTI Colloquium, "Importance of Data and Linguistics in Neural Language Generation"
  Nov 19 - organizing EMNLP Workshop on Noisy User-generated Text
Teaching
Spring 2021 -- CS 4650 Natural Language Processing (more information coming soon!)

Previous Classes:

Research Highlights

Natural Language Generation / Stylistics

Many text-to-text generation problems can be thought of as sentential paraphrasing or monolingual machine translation. These tasks face an exponential search space larger than bilingual translation, but a much smaller optimal solution space due to task-specific requirements. I am interested in a variety of generation problems, including text simplification, style transfer, paraphrase generation, and error correction. My work uncovered multiple serious problems in previous research (from 2010 to 2014) on text simplification [TACL'15], designed a new tunable metric, SARI [TACL'16], which is effective both for evaluation and as a learning objective for training (since added by the Google AI group to TensorFlow), optimized syntax-based machine translation models [TACL'16], created pairwise neural ranking models for lexical simplification [EMNLP'18], and studied document-level simplification [AAAI'20]. Our newest Transformer-based model initialized with BERT is the current state of the art for automatic text simplification [ACL'20a]. I am also interested in text generation for style transfer [COLING'12] and stylistics in general (e.g., historic ↔ modern, non-standard ↔ standard [BUCC'13], feminine ↔ masculine [AAAI'16]).
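For illustration, the core idea behind SARI can be sketched as follows: the metric compares the system output against both the source sentence and the reference simplifications, scoring the words that are added, kept, and deleted. The sketch below is a simplified unigram-only version written for this page (the real metric uses n-grams up to length 4 and reference-weighted credit, so treat this as an illustrative assumption, not the official implementation):

```python
def sari_unigram(source, prediction, references):
    """Illustrative unigram-only sketch of a SARI-style score.

    Simplifying assumption: references are pooled as plain sets; the
    real SARI weights credit by the fraction of references.
    """
    src = set(source.split())
    pred = set(prediction.split())
    ref_union = set().union(*(set(r.split()) for r in references))
    ref_inter = set.intersection(*(set(r.split()) for r in references))

    def f1(p, r):
        return 2 * p * r / (p + r) if p + r else 0.0

    def ratio(num, den):
        return len(num) / len(den) if den else 0.0

    # ADD: words newly introduced by the system, credited if a reference adds them
    add_sys, add_ref = pred - src, ref_union - src
    f_add = f1(ratio(add_sys & add_ref, add_sys), ratio(add_sys & add_ref, add_ref))

    # KEEP: source words retained by the system, credited if references retain them
    keep_sys, keep_ref = src & pred, src & ref_union
    f_keep = f1(ratio(keep_sys & keep_ref, keep_sys), ratio(keep_sys & keep_ref, keep_ref))

    # DELETE: source words dropped by the system; only precision is used, as in SARI
    del_sys, del_ref = src - pred, src - ref_inter
    p_del = ratio(del_sys & del_ref, del_sys)

    return (f_add + f_keep + p_del) / 3
```

Averaging three operation-specific scores is what makes the metric tunable: rewarding deletions and additions separately is what distinguishes simplification quality from plain similarity to the references.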

Natural Language Understanding / Semantics

My approach to natural language understanding is to learn and model paraphrases on a much larger scale and with a much broader range than previous work, essentially by developing more robust machine learning models and leveraging social media data. These paraphrases enable natural language systems to handle errors (e.g., “everytime” ↔ “every time”), lexical variations (e.g., “oscar nom’d doc” ↔ “Oscar-nominated documentary”), rare words (e.g., “NetsBulls series” ↔ “Nets and Bulls games”), and language shifts (e.g., “is bananas” ↔ “is great”). We designed a series of unsupervised and supervised learning approaches for paraphrase identification in social media data (also applicable to question/answer pairs for QA systems), ranging from neural network models [COLING'18][NAACL'18a] to multi-instance learning [TACL'14][EMNLP'16], and crowdsourced large-scale datasets [SemEval'15][EMNLP'17].
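As a toy illustration of why noisy variants defeat exact word matching (and note this heuristic is my own hypothetical baseline for this page, not the neural or multi-instance models in the papers above), a character n-gram overlap score already handles spacing variation such as “everytime” ↔ “every time”:

```python
def char_ngrams(text, n=3):
    # Character n-grams with spaces removed are robust to spacing/spelling noise
    s = text.lower().replace(" ", "")
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def paraphrase_score(a, b, n=3):
    """Jaccard similarity over character n-grams: a simple illustrative
    baseline for noisy-text paraphrase identification."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0
```

For example, `paraphrase_score("everytime", "every time")` returns 1.0 because the two strings are identical once spacing is ignored, while semantically equivalent but lexically distant pairs like “is bananas” ↔ “is great” score near zero, which is exactly the gap that learned paraphrase models must close.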

Noisy User-generated Data / Social Media

For AI to truly understand human language and help people (e.g., instructing a robot), we ought to study the language people actually use in their daily lives (e.g., posting on social media), in addition to the formally written texts that are well supported by existing NLP software. I thus focus on specially designed learning algorithms, and the data needed to train them, to develop tools for processing and analyzing noisy user-generated data. I have worked extensively with Twitter data [EMNLP'19][EMNLP'17][EMNLP'16][TACL'14], given its importance and large-scale coverage. Social media also contains very diverse language for studying stylistics and semantics, carrying information that is important for both people’s everyday lives and national security. In the past three years, with my students, I have expanded my scope to cover a wider range of user-generated data, including biology lab protocols [NAACL'18b], GitHub, and StackOverflow [ACL'20b].

Publications
Students
Current Students:
    Mounica Maddela (PhD student @GaTech, 2017 -- ; generation/neural ranking model ACL'19 EMNLP'18)
    Chao Jiang (PhD student @GaTech, 2018 -- ; semantics ACL'20a NAACL'18)
    Wuwei Lan (PhD student @OSU, 2016 -- ; semantics COLING'18 NAACL'18a EMNLP'17)
    Jeniya Tabassum (PhD student @OSU, 2016 -- ; social media/IE ACL'20b EMNLP'16 - co-advisor: Alan Ritter)
    Yang Zhong (PhD student @OSU, 2019 -- ; stylistics AAAI'20 AAAI'19)
    Jonathan Zheng (Undergraduate @GaTech, autumn 2020 -- ; misinformation)

PhD Thesis Committee:
    Maria Pershina (PhD @NYU, 2014; information extraction ACL'14 - advisor: Ralph Grishman → Bloomberg)
    Kai Cao (PhD @NYU, 2017; information extraction - advisor: Ralph Grishman)
    Sanqiang Zhao (PhD candidate @UPitt; text simplification - advisor: Daqing He)

Former Student Advisees:
    Jim Chen (Undergraduate @UPenn; crowdsourcing HCOMP'14 TACL'16 → PhD @University of Washington)
    Ray Lei (Undergraduate @UPenn; crowdsourcing HCOMP'14 → Microsoft)
    Wenchao Du (Undergraduate @UWaterloo; dialog AAAI'17 SAP → Masters @CMU LTI - advisor: Pascal Poupart)
    Mingkun Gao (Masters student @UPenn; crowdsourcing/machine translation NAACL'15 → PhD student @UIUC)
    Siyu Qiu (Masters student @UPenn; semantics EMNLP'17 → Hulu)
    Chaitanya Kulkarni (PhD student @OSU; biology protocols NAACL'18b - advisor: Raghu Machiraju)
    Piyush Ghai (Masters student @OSU; semantics → Amazon)
    Sydney Lee (Undergraduate @OSU; data annotation WNUT'20 → Capital One)
    Brian Seeds (Undergraduate @OSU; graphical user interface)
    Daniel Szoke (Undergraduate @OSU; offensive language)
    Sam Stevens (Undergraduate @OSU; scientific writing)
    Sarah Flanagan (Undergraduate @OSU; data annotation)
    Kenneth Koepcke (Undergraduate @OSU → @UIUC)
    Panya Bhinder (High school intern @OSU, summer 2020)
    Solomon Wood (High school intern @OSU, spring 2020)

Professional Service
Workshop Chair:   ACL (2017)
Area Chair/Senior Area Chair:   ACL (2020, 2019), EMNLP (2020, 2018, 2016), AAAI (2020), NAACL (2021, 2019), COLING (2018)
Publicity Chair:   EMNLP (2019), NAACL (2018, 2016)
Organizer:
     - Workshop on Noisy User-generated Text (W-NUT) at ACL 2015, COLING 2016, EMNLP 2017, 2018, 2019, 2020
     - SemEval 2015 shared-task: Paraphrases and Semantic Similarity in Twitter
     - 2016 Mid-Atlantic Student Colloquium on Speech, Language and Learning
Program Committee:
     ACL (2018, 2017, 2015, 2014, 2013), NAACL (2018, 2015), EMNLP (2017, 2016, 2015, 2014), COLING (2016, 2014)
     WWW (2016, 2015), AAAI (2016, 2015, 2012), KDD (2015)
Journal Reviewer:
     Transactions of the Association for Computational Linguistics (TACL)
     Journal of Artificial Intelligence Research (JAIR)

Invited Talks
Miscellaneous

When I have spare time, I enjoy visiting art museums, swimming, running, and snowboarding.

I wrote a biography of my PhD advisor Ralph Grishman, along with some early history of Information Extraction research, in 2017.

I also made a list of the best-dressed NLP researchers in 2016/17, 2015, and 2014.