My research lies at the intersection of machine learning, natural language processing, and social media. I focus on designing algorithms that learn semantics from large volumes of data for natural language understanding and generation, in particular with stylistic variations. I recently received the NSF CRII Award, the Criteo Faculty Research Award, the CrowdFlower AI for Everyone Award, a Best Paper Award at COLING'18, as well as research funding from DARPA. Previously, I was a postdoctoral researcher at the University of Pennsylvania. I received my PhD in Computer Science from New York University, where I was a MacCracken Fellow, and my MS and BS from Tsinghua University.
I am an area chair for ACL 2019 (semantics area), NAACL 2019 (generation area), EMNLP 2018 (social media area), COLING 2018 (semantics area), EMNLP 2016 (generation area), a workshop chair for ACL 2017, and the publicity chair for EMNLP 2019, NAACL 2018 and 2016. I also created the Twitter API tutorial and a new course on Social Media and Text Analytics.
I am looking for one or two new PhD students every year. Here is a note to prospective students.
We design machine learning algorithms to extract semantic or structured knowledge from large volumes of data. We have a series of work on learning web-scale paraphrases from Twitter that enables natural language systems to handle errors (e.g. “everytime” ↔ “every time”), lexical variations (e.g. “oscar nom’d doc” ↔ “Oscar-nominated documentary”), rare words (e.g. “NetsBulls series” ↔ “Nets and Bulls games”), and language shifts (e.g. “is bananas” ↔ “is great”). Such lexically divergent paraphrases are difficult to capture with conventional similarity-based approaches. We design large-scale datasets [BUCC'13][SemEval'15][EMNLP'17], neural network models for sentence pair modeling [NAACL'18a][COLING'18], and multi-instance learning models [TACL'14][EMNLP'16] that jointly infer latent word-level and sentence-level paraphrase relations.
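To see why surface-similarity approaches struggle here, consider a minimal bag-of-words cosine similarity (an illustrative baseline, not any of the models cited above): a lexically divergent paraphrase pair can share zero tokens and thus score zero.

```python
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity over lowercased whitespace tokens."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# A lexically divergent paraphrase pair in Twitter-style text:
print(cosine("oscar nom'd doc", "Oscar-nominated documentary"))  # 0.0: no shared tokens
```

Despite meaning the same thing, the pair gets a similarity of zero, which is why models that reason about latent word-level alignments are needed instead.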
Natural Language Generation / Stylistics
Many text-to-text generation problems can be viewed as sentential paraphrasing or monolingual machine translation. Monolingual translation faces an exponentially larger search space than bilingual translation, but a much smaller space of acceptable outputs due to task-specific requirements. I advocate a text-to-text generation framework built on top of machine translation technologies. My recent work uncovered serious problems in text simplification research published between 2010 and 2014 [TACL'15], designed automatic evaluation metrics to optimize syntax-based machine translation models [TACL'16], and created neural ranking models that achieve new state-of-the-art results for lexical simplification [EMNLP'18]. I am also interested in text generation across different language styles (e.g. historic ↔ modern [COLING'12], non-standard ↔ standard [BUCC'13], feminine ↔ masculine [AAAI'16]).
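As a concrete illustration of the lexical simplification task, here is a classic frequency-based substitution-ranking baseline (this is an illustrative sketch, not the neural ranking model of [EMNLP'18]; the word list and corpus counts below are hypothetical):

```python
# Hypothetical corpus frequencies; in practice these would come from a
# large corpus such as web or Wikipedia text.
CORPUS_FREQ = {
    "help": 9_500_000, "assist": 310_000, "aid": 820_000,
    "use": 12_000_000, "utilize": 95_000, "employ": 400_000,
}

def rank_simplifications(candidates: list[str]) -> list[str]:
    """Rank substitution candidates by corpus frequency (more frequent ~ simpler)."""
    return sorted(candidates, key=lambda w: CORPUS_FREQ.get(w, 0), reverse=True)

print(rank_simplifications(["employ", "use", "utilize"]))  # ['use', 'employ', 'utilize']
```

Frequency baselines like this ignore context, which is one motivation for learned ranking models that score candidates with richer features.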
Learning Large-scale Paraphrases for Natural Language Understanding and Generation
May 2018, Facebook, Menlo Park, CA
May 2018, Twitter, San Francisco, CA
Nov 2017, IBM Thomas J. Watson Research Center, New York
How Does AI Understand Language?
Mar 2018, Women in Analytics Conference (Main-stage Panel)
Can Paraphrase Be an Ultimate Solution for NLU and NLG?
July 2017, Google Research, New York, NY
Paraphrase ≈ Monolingual Translation
Aug 2016, Amazon, Berlin, Germany
Multiple-instance Learning from Unlimited Text
Dec 2016, Microsoft Research Asia, Beijing, China
Sep 2016, University of Delaware, Newark, DE
May 2016, University of Edinburgh, Edinburgh, United Kingdom
Apr 2016, Ohio State University, Columbus, OH
Apr 2016, University of North Carolina, Chapel Hill, NC
Mar 2016, Arizona State University, Tempe, AZ
Mar 2016, Vanderbilt University, Nashville, TN
Mar 2016, Imperial College London, London, United Kingdom
Mar 2016, University of Waterloo, Waterloo, ON, Canada (CS Seminar)
Feb 2016, Indiana University, Bloomington, IN (Computer Science Colloquium Series)
Feb 2016, Washington University in St. Louis, St. Louis, MO (Computer Science & Engineering Colloquia Series)
Feb 2016, Simon Fraser University, Vancouver, BC, Canada
Feb 2016, University of Alberta, Edmonton, AB, Canada (Special Lecture)
Feb 2016, Yale University, New Haven, CT (CS Talk)
Oct 2015, University of Maryland, College Park, MD (CLIP Colloquium)
Oct 2015, Ohio State University, Columbus, OH (Clippers Seminar)
Large-scale Paraphrase Acquisition from Twitter
May 2015, DARPA DEFT PI Meeting, Boulder, CO
Learning and Generating Paraphrases from Twitter and Beyond [poster]
Apr 2015, Carnegie Mellon University, Pittsburgh, PA
Apr 2015, Columbia University, New York, NY (NLP Talk)
Feb 2015, Johns Hopkins University, Baltimore, MD (CLIP Colloquium)
Paraphrases in Twitter [slides]
Feb 2015, Twitter, San Francisco, CA
Modeling Lexically Divergent Paraphrases in Twitter (and Shakespeare!) [poster]
Mar 2015, The City University of New York, New York, NY (NLP Seminar)
Feb 2015, IBM Research - Almaden, San Jose, CA
Feb 2015, UC Berkeley, Berkeley, CA
Feb 2015, UT Austin, Austin, TX (Forum for Artificial Intelligence)
Dec 2014, Yahoo! Research, New York, NY
Nov 2014, Carnegie Mellon University, Pittsburgh, PA (CL+NLP Lunch Seminar)
Aug 2014, Microsoft Research, Redmond, WA (Visiting Speaker Series)
Incremental Information Extraction
Apr 2012, Stanford Research Institute, Palo Alto, CA
May 2011, IARPA's KDD PI Meeting, San Diego, CA
Information Extraction Research
Jan 2011, University of Washington, Seattle, WA
Nov 2009, Thomson Reuters, Eagan, MN
Mar 2007, France Telecom, Beijing, China
When I have spare time, I enjoy art, visiting museums, swimming, and snowboarding.