Chris Kedzie
I am very excited to announce that I successfully defended my dissertation in December 2020 and have joined where I will continue doing natural language generation research in the context of conversational assistants!
(About)
And you may ask yourself, "Well, how did I get here?"
Salutations! I was a PhD student (2014-2020) in the
Department of Computer Science at
Columbia University,
where I worked in the
Natural Language
Processing (NLP) group with my advisor
Kathy McKeown and many other
wonderful colleagues.
Before that I worked as a composer’s assistant and
even before that, I studied classical guitar and music theory/composition at
Loyola Marymount University.
My advisor and I were recently very fortunate to receive a best paper
award at INLG 2019.
(Research)
And you may ask yourself, "How do I work
this?"
I am interested in computational models of natural language generation and
understanding. I am actively investigating methods for improving neural
network-based language models via interaction with secondary models of
semantic and/or syntactic structure. Currently, I am exploring various
cooperative learning schemes in which a semantic parser validates the
outputs of a learned neural language generation model, not only at test time,
but also as a teacher providing noisy supervision during training.
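To make the cooperative scheme concrete, here is a minimal sketch of a parser-validated self-training filter. The generator and parser below are illustrative stand-ins (simple string templates and slot matching), not the actual neural models; the idea is only that candidates whose parse does not round-trip to the input meaning representation are discarded, and the survivors can serve as additional noisy training pairs.

```python
# Hypothetical sketch: a semantic parser filters the outputs of a
# generator so that only meaning-preserving candidates are kept.

def generate_candidates(meaning_repr):
    # Stand-in for sampling from a neural generator: produce several
    # candidate realizations of the input meaning representation (MR).
    name, food = meaning_repr["name"], meaning_repr["food"]
    return [
        f"{name} serves {food} food.",
        f"{name} is a restaurant.",          # drops the "food" slot
        f"{food} food is served at {name}.",
    ]

def parse(text, names, foods):
    # Stand-in for a semantic parser: recover slot values from text.
    mr = {}
    for name in names:
        if name in text:
            mr["name"] = name
    for food in foods:
        if food in text:
            mr["food"] = food
    return mr

def validated_pairs(meaning_repr, names, foods):
    # Keep only candidates whose parse round-trips to the input MR;
    # these (MR, text) pairs become extra noisy supervision.
    return [
        (meaning_repr, text)
        for text in generate_candidates(meaning_repr)
        if parse(text, names, foods) == meaning_repr
    ]

mr = {"name": "The Punter", "food": "Italian"}
pairs = validated_pairs(mr, ["The Punter"], ["Italian"])
for _, text in pairs:
    print(text)
```

In this toy run the second candidate is filtered out because its parse is missing the cuisine slot, while the other two realizations round-trip and would be retained as training data.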
Some of my research interests include:
- Text Generation: Deep Neural Network (DNN) models of text generation, paraphrase, and summarization.
- Faithful/Controllable Generation: Conditional generation of natural language from formal meaning representations/semantics/data, with an emphasis on ensuring the correctness of the generated text with respect to model inputs.
- Inductive Bias in Text Generation Tasks: Understanding when humans find text or data interesting, salient, or otherwise remarkable, and building models to do the same in the context of document/data summarization.
I also apply machine learning to natural language data to make useful predictions, such as predicting the importance of text for summarization or extracting signals from the web for social scientists.
Selected Publications
For a complete list please see my Google Scholar profile.
Controllable Meaning Representation to Text Generation: Linearization and Data Augmentation Strategies
in Proceedings of Empirical Methods in Natural Language Processing. 2020.
Incorporating Terminology Constraints in Automatic Post-Editing
in Proceedings of the Fifth Conference on Machine Translation. 2020.
A Good Sample is Hard to Find: Noise Injection Sampling and Self-Training for Neural Language Generation Models
in Proceedings of the 12th International Conference on Natural Language Generation. 2019. (Best Paper Award)
Low-Level Linguistic Controls for Style Transfer and Content Preservation
in Proceedings of the 12th International Conference on Natural Language Generation. 2019.
Content Selection in Deep Learning Models of Summarization
in Proceedings of Empirical Methods in Natural Language Processing. 2018.
Real-Time Web Scale Event Summarization Using Sequential Decision Making
in Proceedings of the International Joint Conference on Artificial Intelligence. 2016.
Predicting Salient Updates for Disaster Summarization
in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. 2015.
Multimodal social media analysis for gang violence prevention
in Proceedings of the International AAAI Conference on Web and Social Media. 2019.
Detecting Gang-Involved Escalation on Social Media Using Context
in Proceedings of Empirical Methods in Natural Language Processing. 2018.