[
  {
    "path": ".gitignore",
    "content": ".DS_Store\n\npages/\n*.fdb_latexmk\n*.bbl\n*.aux\n*.out\n*.toc\n*.fls\n*.blg\n*.log\n*.lot\n*.lof\n*.synctex.gz\n"
  },
  {
    "path": "Makefile",
    "content": "thesis.pdf: $(wildcard *.tex) $(wildcard chapters/natlog/*.tex)  $(wildcard chapters/naturalli/*.tex) $(wildcard chapters/openie/*.tex) $(wildcard chapters/qa/*.tex)  Makefile macros.tex std-macros.tex ref.bib\n\t@pdflatex thesis\n\t@bibtex thesis\n\t@pdflatex thesis\n\t@pdflatex thesis\n\nclean:\n\trm -f *.aux *.log *.bbl *.blg present.pdf *.bak *.ps *.dvi *.lot *.bcf thesis.pdf\n\ndist: thesis.pdf\n\t@pdflatex --file-line-errors thesis\n\ndefault: thesis.pdf\n"
  },
  {
    "path": "README.md",
    "content": "## Danqi Chen's Thesis\n\n### Reference\n\n```\n@phdthesis{chen2018neural,\n  title={Neural Reading Comprehension and Beyond},\n  author={Chen, Danqi},\n  year={2018},\n  school={Stanford University}\n}\n```\n\n### Acknowledgement\n\nThis thesis is built on top of [Gabor Angeli's thesis template](https://github.com/gangeli/thesis).\n\n### Contact\n\nIf you have any comments or questions about the thesis, please use pull requests or email <danqi@cs.stanford.edu>.\n"
  },
  {
    "path": "ack.tex",
    "content": "%!TEX root = thesis.tex\n\n\\prefacesection{Acknowledgments}\n\nThe past six years at Stanford have been an unforgettable and invaluable experience to me. When I first started my PhD in 2012, I could barely speak fluent English (I was required to take five English courses at Stanford), knew little about this country and had never heard of the term ``natural language processing''.  It is unbelievable that over the following years I have actually been doing research about language and training computer systems to understand human languages (English in most cases), as well as training myself to speak and write in English. At the same time, 2012 is the year that deep neural networks (also called deep learning) started to take off and dominate almost all the AI applications we are seeing today. I witnessed how fast Artificial Intelligence has been developing from the beginning of the journey and feel quite excited —-- and occasionally panicked —-- to be a part of this trend. I would not have been able to make this journey without the help and support of many, many people and I feel deeply indebted to them.\n\nFirst and foremost, my greatest thanks go to my advisor Christopher Manning. I really didn't know Chris when I first came to Stanford --- only after a couple of years that I worked with him and learned about NLP, did I realize how privileged I am to get to work with one of the most brilliant minds in our field. He always has a very insightful, high-level view about the field while he is also uncommonly detail oriented and understands the nature of the problems very well. More importantly, Chris is an extremely kind, caring and supportive advisor that I could not have asked for more. He is like an older friend of mine (if he doesn't mind me saying so) and I can talk with him about everything. He always believes in me even though I am not always that confident about myself. 
I am forever grateful to him and I have already started to miss him.\n\nI would like to thank Dan Jurafsky and Percy Liang --- the other two giants of the Stanford NLP group --- for being on my thesis committee and for a lot of guidance and help throughout my PhD studies. Dan is an extremely charming, enthusiastic and knowledgeable person, and I always feel my passion getting ignited after talking to him. Percy is a superman and a role model for all the NLP PhD students (at least for myself). I never understood how one can accomplish so many things at the same time, and a big part of this dissertation is built on top of his research. I want to thank Chris, Dan and Percy for setting up the Stanford NLP Group, my home at Stanford, and I will always be proud to be a part of this family.\n\nIt is also my great honor to have Luke Zettlemoyer on my thesis committee. The work presented in this dissertation is very relevant to his research and I learned a lot from his papers. I look forward to working with him in the near future. I also would like to thank Yinyu Ye for his time chairing my thesis defense.\n\nDuring my PhD, I did two wonderful internships at Microsoft Research and Facebook AI Research. I thank my mentors at these places: Kristina Toutanova, Antoine Bordes and Jason Weston. My internship project at Facebook eventually led to the \\sys{DrQA} project and a part of this dissertation. I also would like to thank Microsoft and Facebook for providing me with fellowships.\n\nCollaboration is a big lesson that I learned, and also a fun part of graduate school. I thank my fellow collaborators: Gabor Angeli, Jason Bolton, Arun Chaganty, Adam Fisch, Jon Gauthier, Shayne Longpre, Jesse Mu, Siva Reddy, Richard Socher, Yuhao Zhang, Victor Zhong, and others. In particular, Richard --- with him I finished my first paper in graduate school. He had a very clear sense of how to define an impactful research project while I had little experience at the time. 
Adam and Siva --- with them I finished the \\sys{DrQA} and \\sys{CoQA} projects respectively. Not only am I proud of these two projects, but I also greatly enjoyed the collaborations. We have since become good friends. The KBP team, especially Yuhao, Gabor and Arun --- I enjoyed the teamwork during those two summers. Jon, Victor, Shayne and Jesse, the younger people that I got to work with, although I wish I could have done a better job. I also want to thank the two teaching teams (7 and 25 people respectively) for the NLP class that I worked on; that was a unique and rewarding experience for me.\n\nI thank the whole Stanford NLP Group, especially Sida Wang, Will Monroe, Angel Chang, Gabor Angeli, Siva Reddy, Arun Chaganty, Yuhao Zhang, Peng Qi, Jacob Steinhardt, Jiwei Li, He He, Robin Jia and Ziang Xie, who gave me a lot of support at various times. I am not even sure if there could be another research group in the world better than our group (I hope I can create a similar one in the future). The NLP retreat, the NLP BBQ and those paper swap nights were among my most vivid memories of graduate school.\n\nOutside of the NLP group, I have been extremely lucky to be surrounded by many great friends. Just to name a few (and forgive me for not being able to list all of them): Yanting Zhao, my close friend for many years, who keeps pulling me out of my stressful PhD life, and I share a lot of joyous moments with her. Xueqing Liu, my classmate and roommate in college, who started her PhD at UIUC in the same year; she is the person that I can keep talking to and exchanging my feelings and thoughts with, especially on those bad days. Tao Lei, a brilliant NLP PhD and my algorithms ``teacher'' in high school; I keep learning from him and getting inspired by every discussion. 
Thanh-Vy Hua, my mentor and ``elder sister'', who always makes sure that I am still on the right track in my life and taught me many meta-skills to survive this journey (even though we have only met three times in the real world). Everyone in the ``\\pinyin{cao3yu2}'' group, I am so happy that I have spent many Friday evenings with you.\n\nDuring the past year, I visited a great number of U.S. universities seeking an academic position. There are so many people I want to thank for assistance along the way --- I either received great help and advice from them, or I felt extremely welcomed during my visit --- including Sanjeev Arora, Yoav Artzi, Regina Barzilay, Chris Callison-Burch, Kai-Wei Chang, Kyunghyun Cho, William Cohen, Michael Collins, Chris Dyer, Jacob Eisenstein, Julia Hirschberg, Julia Hockenmaier, Tengyu Ma, Andrew McCallum, Kathy McKeown, Rada Mihalcea, Tom Mitchell, Ray Mooney, Karthik Narasimhan, Graham Neubig, Christos Papadimitriou, Nanyun Peng, Drago Radev, Sasha Rush, Fei Sha, Yulia Tsvetkov, Luke Zettlemoyer and many others. These people are a big part of the reason that I love our research community so much; therefore, I want to follow their path and dedicate myself to an academic career. I hope to continue to contribute to our research community in the future.\n\nA special thanks to Andrew Chi-Chih Yao for creating the Special Pilot CS Class where I did my undergraduate studies. I am super proud of being a part of the ``Yao class'' family. I also thank Weizhu Chen, Qiang Yang and Haixun Wang, with whom I gained my very first research experience. With their support, I was very fortunate to have the opportunity to come to Stanford for my PhD.\n\nI thank my parents: Zhi Chen and Hongmei Wang. Like most Chinese students of my generation, I am the only child of my family and I have a very close relationship with them --- even though they live 16 (or 15) hours ahead of me and I can only spare 2--3 weeks staying with them every year. 
My parents made me who I am today and I will never know how to repay them. I hope that they are at least a little proud of me for what I have been through so far.\n\nLastly, I would like to thank Huacheng for his love and support (we got married 4 months before this dissertation was submitted). I was fifteen when I first met Huacheng, and we have been experiencing almost everything together since then: from high-school programming competitions, to our wonderful college time at Tsinghua University, to both making it to the Stanford CS PhD program in 2012. For over ten years, he has been not only my partner, my classmate and my best friend, but also the person I admire most, for his modesty, intelligence, concentration and hard work. Without him, I would not have come to Stanford. Without him, I would also not have taken the job at Princeton. I thank him for everything he has done for me.\n\n\\newpage\n\n\\begin{flushright}\nTo my parents and Huacheng, for their unconditional love.\n\\end{flushright}\n"
  },
  {
    "path": "acl_natbib_nourl.bst",
    "content": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\n% BibTeX style file acl_natbib_nourl.bst\n%\n% intended as input to urlbst script\n%\n% adapted from compling.bst \n% in order to mimic the style files for ACL conferences prior to 2017\n% by making the following three changes:\n% - for @incollection, page numbers now follow volume title.\n% - for @inproceedings, address now follows conference name.\n%\t(address is intended as location of conference,\n%\t not address of publisher.)\n% - for papers with three authors, use et al. in citation\n% Dan Gildea 2017/06/08\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\n% BibTeX style file compling.bst\n%\n% Intended for the journal Computational Linguistics (ACL/MIT Press)\n% Created by Ron Artstein on 2005/08/22 \n% For use with <natbib.sty> for author-year citations.\n%\n% I created this file in order to allow submissions to the journal \n% Computational Linguistics using the <natbib> package for author-year\n% citations, which offers a lot more flexibility than <fullname>, CL's \n% official citation package. This file adheres strictly to the official\n% style guide available from the MIT Press:\n%\n% http://mitpress.mit.edu/journals/coli/compling_style.pdf\n%\n% This includes all the various quirks of the style guide, for example: \n% - a chapter from a monograph (@inbook) has no page numbers.\n% - an article from an edited volume (@incollection) has page numbers\n%   after the publisher and address.\n% - an article from a proceedings volume (@inproceedings) has page \n%   numbers before the publisher and address.\n%\n% Where the style guide was inconsistent or not specific enough I \n% looked at actual published articles and exercised my own judgment. 
\n% I noticed two inconsistencies in the style guide:\n%\n% - The style guide gives one example of an article from an edited \n%   volume with the editor's name spelled out in full, and another \n%   with the editors' names abbreviated. I chose to accept the first \n%   one as correct, since the style guide generally shuns abbreviations, \n%   and editors' names are also spelled out in some recently published \n%   articles.\n%\n% - The style guide gives one example of a reference where the word \n%   \"and\" between two authors is preceded by a comma. This is most \n%   likely a typo, since in all other cases with just two authors or \n%   editors there is no comma before the word \"and\".\n%\n% One case where the style guide is not being specific is the placement\n% of the edition number, for which no example is given. I chose to put \n% it immediately after the title, which I (subjectively) find natural,\n% and is also the place of the edition in a few recently published\n% articles.\n%\n% This file correctly reproduces all of the examples in the official\n% style guide, except for the two inconsistencies noted above. I even\n% managed to get it to correctly format the proceedings example which \n% has an organization, a publisher, and two addresses (the conference \n% location and the publisher's address), though I cheated a bit by \n% putting the conference location and month as part of the title field; \n% I feel that in this case the conference location and month can be\n% considered as part of the title, and that adding a location field \n% is not justified. Note also that a location field is not standard, \n% so entries made with this field would not port nicely to other styles. 
\n% However, if authors feel that there's a need for a location field \n% then tell me and I'll see what I can do.\n%\n% The file also produces to my satisfaction all the bibliographical \n% entries in my recent (joint) submission to CL (this was the original \n% motivation for creating the file). I also tested it by running it\n% on a larger set of entries and eyeballing the results. There may of\n% course still be errors, especially with combinations of fields that\n% are not that common, or with cross-references (which I seldom use). \n% If you find such errors please write to me. \n% \n% I hope people find this file useful. Please email me with comments \n% and suggestions.\n% \n% Ron Artstein\n% artstein [at] essex.ac.uk\n% August 22, 2005.\n%\n% Some technical notes.\n%\n% This file is based on a file generated with the package <custom-bib> \n% by Patrick W. Daly (see selected options below), which was then \n% manually customized to conform with certain CL requirements which\n% cannot be met by <custom-bib>. Departures from the generated file \n% include:\n%\n% Function inbook: moved publisher and address to the end; moved \n% edition after title; replaced function format.chapter.pages by \n% new function format.chapter to output chapter without pages.\n% \n% Function inproceedings: moved publisher and address to the end;\n% replaced function format.in.ed.booktitle by new function \n% format.in.booktitle to output the proceedings title without \n% the editor.\n% \n% Functions book, incollection, manual: moved edition after title.\n%\n% Function mastersthesis: formatted title as for articles (unlike \n% phdthesis which is formatted as book) and added month.\n% \n% Function proceedings: added new.sentence between organization and \n% publisher when both are present.\n% \n% Function format.lab.names: modified so that it gives all the\n% authors' surnames for in-text citations for one, two and three\n% authors and only uses \"et. 
al\" for works with four authors or more\n% (thanks to Ken Shan for convincing me to go through the trouble of \n% modifying this function rather than using unreliable hacks).\n%\n% Changes: \n%\n% 2006-10-27: Changed function reverse.pass so that the extra label is\n% enclosed in parentheses when the year field ends in an uppercase or\n% lowercase letter (change modeled after Uli Sauerland's modification\n% of nals.bst). RA.\n%\n%\n% The preamble of the generated file begins below:\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%\n%% This is file `compling.bst',\n%% generated with the docstrip utility.\n%%\n%% The original source files were:\n%%\n%% merlin.mbs  (with options: `ay,nat,vonx,nm-revv1,jnrlst,keyxyr,blkyear,dt-beg,yr-per,note-yr,num-xser,pre-pub,xedn,nfss')\n%% ----------------------------------------\n%% *** Intended for the journal Computational Linguistics ***\n%% \n%% Copyright 1994-2002 Patrick W Daly\n % ===============================================================\n % IMPORTANT NOTICE:\n % This bibliographic style (bst) file has been generated from one or\n % more master bibliographic style (mbs) files, listed above.\n %\n % This generated file can be redistributed and/or modified under the terms\n % of the LaTeX Project Public License Distributed from CTAN\n % archives in directory macros/latex/base/lppl.txt; either\n % version 1 of the License, or any later version.\n % ===============================================================\n % Name and version information of the main mbs file:\n % \\ProvidesFile{merlin.mbs}[2002/10/21 4.05 (PWD, AO, DPC)]\n %   For use with BibTeX version 0.99a or later\n %-------------------------------------------------------------------\n % This bibliography style file is intended for texts in ENGLISH\n % This is an author-year citation style bibliography. 
As such, it is\n % non-standard LaTeX, and requires a special package file to function properly.\n % Such a package is    natbib.sty   by Patrick W. Daly\n % The form of the \\bibitem entries is\n %   \\bibitem[Jones et al.(1990)]{key}...\n %   \\bibitem[Jones et al.(1990)Jones, Baker, and Smith]{key}...\n % The essential feature is that the label (the part in brackets) consists\n % of the author names, as they should appear in the citation, with the year\n % in parentheses following. There must be no space before the opening\n % parenthesis!\n % With natbib v5.3, a full list of authors may also follow the year.\n % In natbib.sty, it is possible to define the type of enclosures that is\n % really wanted (brackets or parentheses), but in either case, there must\n % be parentheses in the label.\n % The \\cite command functions as follows:\n %   \\citet{key} ==>>                Jones et al. (1990)\n %   \\citet*{key} ==>>               Jones, Baker, and Smith (1990)\n %   \\citep{key} ==>>                (Jones et al., 1990)\n %   \\citep*{key} ==>>               (Jones, Baker, and Smith, 1990)\n %   \\citep[chap. 2]{key} ==>>       (Jones et al., 1990, chap. 2)\n %   \\citep[e.g.][]{key} ==>>        (e.g. Jones et al., 1990)\n %   \\citep[e.g.][p. 32]{key} ==>>   (e.g. Jones et al., p. 
32)\n %   \\citeauthor{key} ==>>           Jones et al.\n %   \\citeauthor*{key} ==>>          Jones, Baker, and Smith\n %   \\citeyear{key} ==>>             1990\n %---------------------------------------------------------------------\n\nENTRY\n  { address\n    author\n    booktitle\n    chapter\n    edition\n    editor\n    howpublished\n    institution\n    journal\n    key\n    month\n    note\n    number\n    organization\n    pages\n    publisher\n    school\n    series\n    title\n    type\n    volume\n    year\n  }\n  {}\n  { label extra.label sort.label short.list }\nINTEGERS { output.state before.all mid.sentence after.sentence after.block }\nFUNCTION {init.state.consts}\n{ #0 'before.all :=\n  #1 'mid.sentence :=\n  #2 'after.sentence :=\n  #3 'after.block :=\n}\nSTRINGS { s t}\nFUNCTION {output.nonnull}\n{ 's :=\n  output.state mid.sentence =\n    { \", \" * write$ }\n    { output.state after.block =\n        { add.period$ write$\n          newline$\n          \"\\newblock \" write$\n        }\n        { output.state before.all =\n            'write$\n            { add.period$ \" \" * write$ }\n          if$\n        }\n      if$\n      mid.sentence 'output.state :=\n    }\n  if$\n  s\n}\nFUNCTION {output}\n{ duplicate$ empty$\n    'pop$\n    'output.nonnull\n  if$\n}\nFUNCTION {output.check}\n{ 't :=\n  duplicate$ empty$\n    { pop$ \"empty \" t * \" in \" * cite$ * warning$ }\n    'output.nonnull\n  if$\n}\nFUNCTION {fin.entry}\n{ add.period$\n  write$\n  newline$\n}\n\nFUNCTION {new.block}\n{ output.state before.all =\n    'skip$\n    { after.block 'output.state := }\n  if$\n}\nFUNCTION {new.sentence}\n{ output.state after.block =\n    'skip$\n    { output.state before.all =\n        'skip$\n        { after.sentence 'output.state := }\n      if$\n    }\n  if$\n}\nFUNCTION {add.blank}\n{  \" \" * before.all 'output.state :=\n}\n\nFUNCTION {date.block}\n{\n  new.block\n}\n\nFUNCTION {not}\n{   { #0 }\n    { #1 }\n  if$\n}\nFUNCTION {and}\n{   'skip$\n  
  { pop$ #0 }\n  if$\n}\nFUNCTION {or}\n{   { pop$ #1 }\n    'skip$\n  if$\n}\nFUNCTION {new.block.checkb}\n{ empty$\n  swap$ empty$\n  and\n    'skip$\n    'new.block\n  if$\n}\nFUNCTION {field.or.null}\n{ duplicate$ empty$\n    { pop$ \"\" }\n    'skip$\n  if$\n}\nFUNCTION {emphasize}\n{ duplicate$ empty$\n    { pop$ \"\" }\n    { \"\\emph{\" swap$ * \"}\" * }\n  if$\n}\nFUNCTION {tie.or.space.prefix}\n{ duplicate$ text.length$ #3 <\n    { \"~\" }\n    { \" \" }\n  if$\n  swap$\n}\n\nFUNCTION {capitalize}\n{ \"u\" change.case$ \"t\" change.case$ }\n\nFUNCTION {space.word}\n{ \" \" swap$ * \" \" * }\n % Here are the language-specific definitions for explicit words.\n % Each function has a name bbl.xxx where xxx is the English word.\n % The language selected here is ENGLISH\nFUNCTION {bbl.and}\n{ \"and\"}\n\nFUNCTION {bbl.etal}\n{ \"et~al.\" }\n\nFUNCTION {bbl.editors}\n{ \"editors\" }\n\nFUNCTION {bbl.editor}\n{ \"editor\" }\n\nFUNCTION {bbl.edby}\n{ \"edited by\" }\n\nFUNCTION {bbl.edition}\n{ \"edition\" }\n\nFUNCTION {bbl.volume}\n{ \"volume\" }\n\nFUNCTION {bbl.of}\n{ \"of\" }\n\nFUNCTION {bbl.number}\n{ \"number\" }\n\nFUNCTION {bbl.nr}\n{ \"no.\" }\n\nFUNCTION {bbl.in}\n{ \"in\" }\n\nFUNCTION {bbl.pages}\n{ \"pages\" }\n\nFUNCTION {bbl.page}\n{ \"page\" }\n\nFUNCTION {bbl.chapter}\n{ \"chapter\" }\n\nFUNCTION {bbl.techrep}\n{ \"Technical Report\" }\n\nFUNCTION {bbl.mthesis}\n{ \"Master's thesis\" }\n\nFUNCTION {bbl.phdthesis}\n{ \"Ph.D. 
thesis\" }\n\nMACRO {jan} {\"January\"}\n\nMACRO {feb} {\"February\"}\n\nMACRO {mar} {\"March\"}\n\nMACRO {apr} {\"April\"}\n\nMACRO {may} {\"May\"}\n\nMACRO {jun} {\"June\"}\n\nMACRO {jul} {\"July\"}\n\nMACRO {aug} {\"August\"}\n\nMACRO {sep} {\"September\"}\n\nMACRO {oct} {\"October\"}\n\nMACRO {nov} {\"November\"}\n\nMACRO {dec} {\"December\"}\n\nMACRO {acmcs} {\"ACM Computing Surveys\"}\n\nMACRO {acta} {\"Acta Informatica\"}\n\nMACRO {cacm} {\"Communications of the ACM\"}\n\nMACRO {ibmjrd} {\"IBM Journal of Research and Development\"}\n\nMACRO {ibmsj} {\"IBM Systems Journal\"}\n\nMACRO {ieeese} {\"IEEE Transactions on Software Engineering\"}\n\nMACRO {ieeetc} {\"IEEE Transactions on Computers\"}\n\nMACRO {ieeetcad}\n {\"IEEE Transactions on Computer-Aided Design of Integrated Circuits\"}\n\nMACRO {ipl} {\"Information Processing Letters\"}\n\nMACRO {jacm} {\"Journal of the ACM\"}\n\nMACRO {jcss} {\"Journal of Computer and System Sciences\"}\n\nMACRO {scp} {\"Science of Computer Programming\"}\n\nMACRO {sicomp} {\"SIAM Journal on Computing\"}\n\nMACRO {tocs} {\"ACM Transactions on Computer Systems\"}\n\nMACRO {tods} {\"ACM Transactions on Database Systems\"}\n\nMACRO {tog} {\"ACM Transactions on Graphics\"}\n\nMACRO {toms} {\"ACM Transactions on Mathematical Software\"}\n\nMACRO {toois} {\"ACM Transactions on Office Information Systems\"}\n\nMACRO {toplas} {\"ACM Transactions on Programming Languages and Systems\"}\n\nMACRO {tcs} {\"Theoretical Computer Science\"}\nFUNCTION {bibinfo.check}\n{ swap$\n  duplicate$ missing$\n    {\n      pop$ pop$\n      \"\"\n    }\n    { duplicate$ empty$\n        {\n          swap$ pop$\n        }\n        { swap$\n          pop$\n        }\n      if$\n    }\n  if$\n}\nFUNCTION {bibinfo.warn}\n{ swap$\n  duplicate$ missing$\n    {\n      swap$ \"missing \" swap$ * \" in \" * cite$ * warning$ pop$\n      \"\"\n    }\n    { duplicate$ empty$\n        {\n          swap$ \"empty \" swap$ * \" in \" * cite$ * warning$\n        }\n     
   { swap$\n          pop$\n        }\n      if$\n    }\n  if$\n}\nSTRINGS  { bibinfo}\nINTEGERS { nameptr namesleft numnames }\n\nFUNCTION {format.names}\n{ 'bibinfo :=\n  duplicate$ empty$ 'skip$ {\n  's :=\n  \"\" 't :=\n  #1 'nameptr :=\n  s num.names$ 'numnames :=\n  numnames 'namesleft :=\n    { namesleft #0 > }\n    { s nameptr\n      duplicate$ #1 >\n        { \"{ff~}{vv~}{ll}{, jj}\" }\n        { \"{ff~}{vv~}{ll}{, jj}\" }\t% first name first for first author \n%        { \"{vv~}{ll}{, ff}{, jj}\" }\t% last name first for first author\n      if$\n      format.name$\n      bibinfo bibinfo.check\n      't :=\n      nameptr #1 >\n        {\n          namesleft #1 >\n            { \", \" * t * }\n            {\n              numnames #2 >\n                { \",\" * }\n                'skip$\n              if$\n              s nameptr \"{ll}\" format.name$ duplicate$ \"others\" =\n                { 't := }\n                { pop$ }\n              if$\n              t \"others\" =\n                {\n                  \" \" * bbl.etal *\n                }\n                {\n                  bbl.and\n                  space.word * t *\n                }\n              if$\n            }\n          if$\n        }\n        't\n      if$\n      nameptr #1 + 'nameptr :=\n      namesleft #1 - 'namesleft :=\n    }\n  while$\n  } if$\n}\nFUNCTION {format.names.ed}\n{\n  'bibinfo :=\n  duplicate$ empty$ 'skip$ {\n  's :=\n  \"\" 't :=\n  #1 'nameptr :=\n  s num.names$ 'numnames :=\n  numnames 'namesleft :=\n    { namesleft #0 > }\n    { s nameptr\n      \"{ff~}{vv~}{ll}{, jj}\"\n      format.name$\n      bibinfo bibinfo.check\n      't :=\n      nameptr #1 >\n        {\n          namesleft #1 >\n            { \", \" * t * }\n            {\n              numnames #2 >\n                { \",\" * }\n                'skip$\n              if$\n              s nameptr \"{ll}\" format.name$ duplicate$ \"others\" =\n                { 't := }\n                { pop$ }\n         
     if$\n              t \"others\" =\n                {\n\n                  \" \" * bbl.etal *\n                }\n                {\n                  bbl.and\n                  space.word * t *\n                }\n              if$\n            }\n          if$\n        }\n        't\n      if$\n      nameptr #1 + 'nameptr :=\n      namesleft #1 - 'namesleft :=\n    }\n  while$\n  } if$\n}\nFUNCTION {format.key}\n{ empty$\n    { key field.or.null }\n    { \"\" }\n  if$\n}\n\nFUNCTION {format.authors}\n{ author \"author\" format.names\n}\nFUNCTION {get.bbl.editor}\n{ editor num.names$ #1 > 'bbl.editors 'bbl.editor if$ }\n\nFUNCTION {format.editors}\n{ editor \"editor\" format.names duplicate$ empty$ 'skip$\n    {\n      \",\" *\n      \" \" *\n      get.bbl.editor\n      *\n    }\n  if$\n}\nFUNCTION {format.note}\n{\n note empty$\n    { \"\" }\n    { note #1 #1 substring$\n      duplicate$ \"{\" =\n        'skip$\n        { output.state mid.sentence =\n          { \"l\" }\n          { \"u\" }\n        if$\n        change.case$\n        }\n      if$\n      note #2 global.max$ substring$ * \"note\" bibinfo.check\n    }\n  if$\n}\n\nFUNCTION {format.title}\n{ title\n  duplicate$ empty$ 'skip$\n    { \"t\" change.case$ }\n  if$\n  \"title\" bibinfo.check\n}\nFUNCTION {format.full.names}\n{'s :=\n \"\" 't :=\n  #1 'nameptr :=\n  s num.names$ 'numnames :=\n  numnames 'namesleft :=\n    { namesleft #0 > }\n    { s nameptr\n      \"{vv~}{ll}\" format.name$\n      't :=\n      nameptr #1 >\n        {\n          namesleft #1 >\n            { \", \" * t * }\n            {\n              s nameptr \"{ll}\" format.name$ duplicate$ \"others\" =\n                { 't := }\n                { pop$ }\n              if$\n              t \"others\" =\n                {\n                  \" \" * bbl.etal *\n                }\n                {\n                  numnames #2 >\n                    { \",\" * }\n                    'skip$\n                  if$\n                  
bbl.and\n                  space.word * t *\n                }\n              if$\n            }\n          if$\n        }\n        't\n      if$\n      nameptr #1 + 'nameptr :=\n      namesleft #1 - 'namesleft :=\n    }\n  while$\n}\n\nFUNCTION {author.editor.key.full}\n{ author empty$\n    { editor empty$\n        { key empty$\n            { cite$ #1 #3 substring$ }\n            'key\n          if$\n        }\n        { editor format.full.names }\n      if$\n    }\n    { author format.full.names }\n  if$\n}\n\nFUNCTION {author.key.full}\n{ author empty$\n    { key empty$\n         { cite$ #1 #3 substring$ }\n          'key\n      if$\n    }\n    { author format.full.names }\n  if$\n}\n\nFUNCTION {editor.key.full}\n{ editor empty$\n    { key empty$\n         { cite$ #1 #3 substring$ }\n          'key\n      if$\n    }\n    { editor format.full.names }\n  if$\n}\n\nFUNCTION {make.full.names}\n{ type$ \"book\" =\n  type$ \"inbook\" =\n  or\n    'author.editor.key.full\n    { type$ \"proceedings\" =\n        'editor.key.full\n        'author.key.full\n      if$\n    }\n  if$\n}\n\nFUNCTION {output.bibitem}\n{ newline$\n  \"\\bibitem[{\" write$\n  label write$\n  \")\" make.full.names duplicate$ short.list =\n     { pop$ }\n     { * }\n   if$\n  \"}]{\" * write$\n  cite$ write$\n  \"}\" write$\n  newline$\n  \"\"\n  before.all 'output.state :=\n}\n\nFUNCTION {n.dashify}\n{\n  't :=\n  \"\"\n    { t empty$ not }\n    { t #1 #1 substring$ \"-\" =\n        { t #1 #2 substring$ \"--\" = not\n            { \"--\" *\n              t #2 global.max$ substring$ 't :=\n            }\n            {   { t #1 #1 substring$ \"-\" = }\n                { \"-\" *\n                  t #2 global.max$ substring$ 't :=\n                }\n              while$\n            }\n          if$\n        }\n        { t #1 #1 substring$ *\n          t #2 global.max$ substring$ 't :=\n        }\n      if$\n    }\n  while$\n}\n\nFUNCTION {word.in}\n{ bbl.in capitalize\n  \" \" * }\n\nFUNCTION 
{format.date}\n{ year \"year\" bibinfo.check duplicate$ empty$\n    {\n    }\n    'skip$\n  if$\n  extra.label *\n  before.all 'output.state :=\n  after.sentence 'output.state :=\n}\nFUNCTION {format.btitle}\n{ title \"title\" bibinfo.check\n  duplicate$ empty$ 'skip$\n    {\n      emphasize\n    }\n  if$\n}\nFUNCTION {either.or.check}\n{ empty$\n    'pop$\n    { \"can't use both \" swap$ * \" fields in \" * cite$ * warning$ }\n  if$\n}\nFUNCTION {format.bvolume}\n{ volume empty$\n    { \"\" }\n    { bbl.volume volume tie.or.space.prefix\n      \"volume\" bibinfo.check * *\n      series \"series\" bibinfo.check\n      duplicate$ empty$ 'pop$\n        { swap$ bbl.of space.word * swap$\n          emphasize * }\n      if$\n      \"volume and number\" number either.or.check\n    }\n  if$\n}\nFUNCTION {format.number.series}\n{ volume empty$\n    { number empty$\n        { series field.or.null }\n        { series empty$\n            { number \"number\" bibinfo.check }\n        { output.state mid.sentence =\n            { bbl.number }\n            { bbl.number capitalize }\n          if$\n          number tie.or.space.prefix \"number\" bibinfo.check * *\n          bbl.in space.word *\n          series \"series\" bibinfo.check *\n        }\n      if$\n    }\n      if$\n    }\n    { \"\" }\n  if$\n}\n\nFUNCTION {format.edition}\n{ edition duplicate$ empty$ 'skip$\n    {\n      output.state mid.sentence =\n        { \"l\" }\n        { \"t\" }\n      if$ change.case$\n      \"edition\" bibinfo.check\n      \" \" * bbl.edition *\n    }\n  if$\n}\nINTEGERS { multiresult }\nFUNCTION {multi.page.check}\n{ 't :=\n  #0 'multiresult :=\n    { multiresult not\n      t empty$ not\n      and\n    }\n    { t #1 #1 substring$\n      duplicate$ \"-\" =\n      swap$ duplicate$ \",\" =\n      swap$ \"+\" =\n      or or\n        { #1 'multiresult := }\n        { t #2 global.max$ substring$ 't := }\n      if$\n    }\n  while$\n  multiresult\n}\nFUNCTION {format.pages}\n{ pages duplicate$ 
empty$ 'skip$\n    { duplicate$ multi.page.check\n        {\n          bbl.pages swap$\n          n.dashify\n        }\n        {\n          bbl.page swap$\n        }\n      if$\n      tie.or.space.prefix\n      \"pages\" bibinfo.check\n      * *\n    }\n  if$\n}\nFUNCTION {format.journal.pages}\n{ pages duplicate$ empty$ 'pop$\n    { swap$ duplicate$ empty$\n        { pop$ pop$ format.pages }\n        {\n          \":\" *\n          swap$\n          n.dashify\n          \"pages\" bibinfo.check\n          *\n        }\n      if$\n    }\n  if$\n}\nFUNCTION {format.vol.num.pages}\n{ volume field.or.null\n  duplicate$ empty$ 'skip$\n    {\n      \"volume\" bibinfo.check\n    }\n  if$\n  number \"number\" bibinfo.check duplicate$ empty$ 'skip$\n    {\n      swap$ duplicate$ empty$\n        { \"there's a number but no volume in \" cite$ * warning$ }\n        'skip$\n      if$\n      swap$\n      \"(\" swap$ * \")\" *\n    }\n  if$ *\n  format.journal.pages\n}\n\nFUNCTION {format.chapter}\n{ chapter empty$\n    'skip$\n    { type empty$\n        { bbl.chapter }\n        { type \"l\" change.case$\n          \"type\" bibinfo.check\n        }\n      if$\n      chapter tie.or.space.prefix\n      \"chapter\" bibinfo.check\n      * *\n    }\n  if$\n}\n\nFUNCTION {format.chapter.pages}\n{ chapter empty$\n    'format.pages\n    { type empty$\n        { bbl.chapter }\n        { type \"l\" change.case$\n          \"type\" bibinfo.check\n        }\n      if$\n      chapter tie.or.space.prefix\n      \"chapter\" bibinfo.check\n      * *\n      pages empty$\n        'skip$\n        { \", \" * format.pages * }\n      if$\n    }\n  if$\n}\n\nFUNCTION {format.booktitle}\n{\n  booktitle \"booktitle\" bibinfo.check\n  emphasize\n}\nFUNCTION {format.in.booktitle}\n{ format.booktitle duplicate$ empty$ 'skip$\n    {\n      word.in swap$ *\n    }\n  if$\n}\nFUNCTION {format.in.ed.booktitle}\n{ format.booktitle duplicate$ empty$ 'skip$\n    {\n      editor \"editor\" format.names.ed duplicate$ 
empty$ 'pop$\n        {\n          \",\" *\n          \" \" *\n          get.bbl.editor\n          \", \" *\n          * swap$\n          * }\n      if$\n      word.in swap$ *\n    }\n  if$\n}\nFUNCTION {format.thesis.type}\n{ type duplicate$ empty$\n    'pop$\n    { swap$ pop$\n      \"t\" change.case$ \"type\" bibinfo.check\n    }\n  if$\n}\nFUNCTION {format.tr.number}\n{ number \"number\" bibinfo.check\n  type duplicate$ empty$\n    { pop$ bbl.techrep }\n    'skip$\n  if$\n  \"type\" bibinfo.check\n  swap$ duplicate$ empty$\n    { pop$ \"t\" change.case$ }\n    { tie.or.space.prefix * * }\n  if$\n}\nFUNCTION {format.article.crossref}\n{\n  word.in\n  \" \\cite{\" * crossref * \"}\" *\n}\nFUNCTION {format.book.crossref}\n{ volume duplicate$ empty$\n    { \"empty volume in \" cite$ * \"'s crossref of \" * crossref * warning$\n      pop$ word.in\n    }\n    { bbl.volume\n      capitalize\n      swap$ tie.or.space.prefix \"volume\" bibinfo.check * * bbl.of space.word *\n    }\n  if$\n  \" \\cite{\" * crossref * \"}\" *\n}\nFUNCTION {format.incoll.inproc.crossref}\n{\n  word.in\n  \" \\cite{\" * crossref * \"}\" *\n}\nFUNCTION {format.org.or.pub}\n{ 't :=\n  \"\"\n  address empty$ t empty$ and\n    'skip$\n    {\n      t empty$\n        { address \"address\" bibinfo.check *\n        }\n        { t *\n          address empty$\n            'skip$\n            { \", \" * address \"address\" bibinfo.check * }\n          if$\n        }\n      if$\n    }\n  if$\n}\nFUNCTION {format.publisher.address}\n{ publisher \"publisher\" bibinfo.warn format.org.or.pub\n}\n\nFUNCTION {format.organization.address}\n{ organization \"organization\" bibinfo.check format.org.or.pub\n}\n\nFUNCTION {article}\n{ output.bibitem\n  format.authors \"author\" output.check\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.title \"title\" output.check\n  new.block\n  crossref missing$\n    {\n      journal\n      \"journal\" bibinfo.check\n      emphasize\n    
  \"journal\" output.check\n      format.vol.num.pages output\n    }\n    { format.article.crossref output.nonnull\n      format.pages output\n    }\n  if$\n  new.block\n  format.note output\n  fin.entry\n}\nFUNCTION {book}\n{ output.bibitem\n  author empty$\n    { format.editors \"author and editor\" output.check\n      editor format.key output\n    }\n    { format.authors output.nonnull\n      crossref missing$\n        { \"author and editor\" editor either.or.check }\n        'skip$\n      if$\n    }\n  if$\n  format.date \"year\" output.check\n  date.block\n  format.btitle \"title\" output.check\n  format.edition output\n  crossref missing$\n    { format.bvolume output\n      new.block\n      format.number.series output\n      new.sentence\n      format.publisher.address output\n    }\n    {\n      new.block\n      format.book.crossref output.nonnull\n    }\n  if$\n  new.block\n  format.note output\n  fin.entry\n}\nFUNCTION {booklet}\n{ output.bibitem\n  format.authors output\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.title \"title\" output.check\n  new.block\n  howpublished \"howpublished\" bibinfo.check output\n  address \"address\" bibinfo.check output\n  new.block\n  format.note output\n  fin.entry\n}\n\nFUNCTION {inbook}\n{ output.bibitem\n  author empty$\n    { format.editors \"author and editor\" output.check\n      editor format.key output\n    }\n    { format.authors output.nonnull\n      crossref missing$\n        { \"author and editor\" editor either.or.check }\n        'skip$\n      if$\n    }\n  if$\n  format.date \"year\" output.check\n  date.block\n  format.btitle \"title\" output.check\n  format.edition output\n  crossref missing$\n    {\n      format.bvolume output\n      format.number.series output\n      format.chapter \"chapter\" output.check\n      new.sentence\n      format.publisher.address output\n      new.block\n    }\n    {\n      format.chapter \"chapter\" output.check\n      new.block\n  
    format.book.crossref output.nonnull\n    }\n  if$\n  new.block\n  format.note output\n  fin.entry\n}\n\nFUNCTION {incollection}\n{ output.bibitem\n  format.authors \"author\" output.check\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.title \"title\" output.check\n  new.block\n  crossref missing$\n    { format.in.ed.booktitle \"booktitle\" output.check\n      format.edition output\n      format.bvolume output\n      format.number.series output\n      format.chapter.pages output\n      new.sentence\n      format.publisher.address output\n    }\n    { format.incoll.inproc.crossref output.nonnull\n      format.chapter.pages output\n    }\n  if$\n  new.block\n  format.note output\n  fin.entry\n}\nFUNCTION {inproceedings}\n{ output.bibitem\n  format.authors \"author\" output.check\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.title \"title\" output.check\n  new.block\n  crossref missing$\n    { format.in.booktitle \"booktitle\" output.check\n      format.bvolume output\n      format.number.series output\n      format.pages output\n      address \"address\" bibinfo.check output\n      new.sentence\n      organization \"organization\" bibinfo.check output\n      publisher \"publisher\" bibinfo.check output\n    }\n    { format.incoll.inproc.crossref output.nonnull\n      format.pages output\n    }\n  if$\n  new.block\n  format.note output\n  fin.entry\n}\nFUNCTION {conference} { inproceedings }\nFUNCTION {manual}\n{ output.bibitem\n  format.authors output\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.btitle \"title\" output.check\n  format.edition output\n  organization address new.block.checkb\n  organization \"organization\" bibinfo.check output\n  address \"address\" bibinfo.check output\n  new.block\n  format.note output\n  fin.entry\n}\n\nFUNCTION {mastersthesis}\n{ output.bibitem\n  format.authors \"author\" output.check\n  author 
format.key output\n  format.date \"year\" output.check\n  date.block\n  format.title\n  \"title\" output.check\n  new.block\n  bbl.mthesis format.thesis.type output.nonnull\n  school \"school\" bibinfo.warn output\n  address \"address\" bibinfo.check output\n  month \"month\" bibinfo.check output\n  new.block\n  format.note output\n  fin.entry\n}\n\nFUNCTION {misc}\n{ output.bibitem\n  format.authors output\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.title output\n  new.block\n  howpublished \"howpublished\" bibinfo.check output\n  new.block\n  format.note output\n  fin.entry\n}\nFUNCTION {phdthesis}\n{ output.bibitem\n  format.authors \"author\" output.check\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.btitle\n  \"title\" output.check\n  new.block\n  bbl.phdthesis format.thesis.type output.nonnull\n  school \"school\" bibinfo.warn output\n  address \"address\" bibinfo.check output\n  new.block\n  format.note output\n  fin.entry\n}\n\nFUNCTION {proceedings}\n{ output.bibitem\n  format.editors output\n  editor format.key output\n  format.date \"year\" output.check\n  date.block\n  format.btitle \"title\" output.check\n  format.bvolume output\n  format.number.series output\n  new.sentence\n  publisher empty$\n    { format.organization.address output }\n    { organization \"organization\" bibinfo.check output\n      new.sentence\n      format.publisher.address output\n    }\n  if$\n  new.block\n  format.note output\n  fin.entry\n}\n\nFUNCTION {techreport}\n{ output.bibitem\n  format.authors \"author\" output.check\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.title\n  \"title\" output.check\n  new.block\n  format.tr.number output.nonnull\n  institution \"institution\" bibinfo.warn output\n  address \"address\" bibinfo.check output\n  new.block\n  format.note output\n  fin.entry\n}\n\nFUNCTION {unpublished}\n{ output.bibitem\n  
format.authors \"author\" output.check\n  author format.key output\n  format.date \"year\" output.check\n  date.block\n  format.title \"title\" output.check\n  new.block\n  format.note \"note\" output.check\n  fin.entry\n}\n\nFUNCTION {default.type} { misc }\nREAD\nFUNCTION {sortify}\n{ purify$\n  \"l\" change.case$\n}\nINTEGERS { len }\nFUNCTION {chop.word}\n{ 's :=\n  'len :=\n  s #1 len substring$ =\n    { s len #1 + global.max$ substring$ }\n    's\n  if$\n}\nFUNCTION {format.lab.names}\n{ 's :=\n  \"\" 't :=\n  s #1 \"{vv~}{ll}\" format.name$\n  s num.names$ duplicate$\n  #2 >\n    { pop$\n      \" \" * bbl.etal *\n    }\n    { #2 <\n        'skip$\n        { s #2 \"{ff }{vv }{ll}{ jj}\" format.name$ \"others\" =\n            {\n              \" \" * bbl.etal *\n            }\n            { bbl.and space.word * s #2 \"{vv~}{ll}\" format.name$\n              * }\n          if$\n        }\n      if$\n    }\n  if$\n}\n\nFUNCTION {author.key.label}\n{ author empty$\n    { key empty$\n        { cite$ #1 #3 substring$ }\n        'key\n      if$\n    }\n    { author format.lab.names }\n  if$\n}\n\nFUNCTION {author.editor.key.label}\n{ author empty$\n    { editor empty$\n        { key empty$\n            { cite$ #1 #3 substring$ }\n            'key\n          if$\n        }\n        { editor format.lab.names }\n      if$\n    }\n    { author format.lab.names }\n  if$\n}\n\nFUNCTION {editor.key.label}\n{ editor empty$\n    { key empty$\n        { cite$ #1 #3 substring$ }\n        'key\n      if$\n    }\n    { editor format.lab.names }\n  if$\n}\n\nFUNCTION {calc.short.authors}\n{ type$ \"book\" =\n  type$ \"inbook\" =\n  or\n    'author.editor.key.label\n    { type$ \"proceedings\" =\n        'editor.key.label\n        'author.key.label\n      if$\n    }\n  if$\n  'short.list :=\n}\n\nFUNCTION {calc.label}\n{ calc.short.authors\n  short.list\n  \"(\"\n  *\n  year duplicate$ empty$\n  short.list key field.or.null = or\n     { pop$ \"\" }\n     'skip$\n  if$\n  *\n  
'label :=\n}\n\nFUNCTION {sort.format.names}\n{ 's :=\n  #1 'nameptr :=\n  \"\"\n  s num.names$ 'numnames :=\n  numnames 'namesleft :=\n    { namesleft #0 > }\n    { s nameptr\n      \"{ll{ }}{  ff{ }}{  jj{ }}\"\n      format.name$ 't :=\n      nameptr #1 >\n        {\n          \"   \"  *\n          namesleft #1 = t \"others\" = and\n            { \"zzzzz\" * }\n            { t sortify * }\n          if$\n        }\n        { t sortify * }\n      if$\n      nameptr #1 + 'nameptr :=\n      namesleft #1 - 'namesleft :=\n    }\n  while$\n}\n\nFUNCTION {sort.format.title}\n{ 't :=\n  \"A \" #2\n    \"An \" #3\n      \"The \" #4 t chop.word\n    chop.word\n  chop.word\n  sortify\n  #1 global.max$ substring$\n}\nFUNCTION {author.sort}\n{ author empty$\n    { key empty$\n        { \"to sort, need author or key in \" cite$ * warning$\n          \"\"\n        }\n        { key sortify }\n      if$\n    }\n    { author sort.format.names }\n  if$\n}\nFUNCTION {author.editor.sort}\n{ author empty$\n    { editor empty$\n        { key empty$\n            { \"to sort, need author, editor, or key in \" cite$ * warning$\n              \"\"\n            }\n            { key sortify }\n          if$\n        }\n        { editor sort.format.names }\n      if$\n    }\n    { author sort.format.names }\n  if$\n}\nFUNCTION {editor.sort}\n{ editor empty$\n    { key empty$\n        { \"to sort, need editor or key in \" cite$ * warning$\n          \"\"\n        }\n        { key sortify }\n      if$\n    }\n    { editor sort.format.names }\n  if$\n}\nFUNCTION {presort}\n{ calc.label\n  label sortify\n  \"    \"\n  *\n  type$ \"book\" =\n  type$ \"inbook\" =\n  or\n    'author.editor.sort\n    { type$ \"proceedings\" =\n        'editor.sort\n        'author.sort\n      if$\n    }\n  if$\n  #1 entry.max$ substring$\n  'sort.label :=\n  sort.label\n  *\n  \"    \"\n  *\n  title field.or.null\n  sort.format.title\n  *\n  #1 entry.max$ substring$\n  'sort.key$ :=\n}\n\nITERATE 
{presort}\nSORT\nSTRINGS { last.label next.extra }\nINTEGERS { last.extra.num number.label }\nFUNCTION {initialize.extra.label.stuff}\n{ #0 int.to.chr$ 'last.label :=\n  \"\" 'next.extra :=\n  #0 'last.extra.num :=\n  #0 'number.label :=\n}\nFUNCTION {forward.pass}\n{ last.label label =\n    { last.extra.num #1 + 'last.extra.num :=\n      last.extra.num int.to.chr$ 'extra.label :=\n    }\n    { \"a\" chr.to.int$ 'last.extra.num :=\n      \"\" 'extra.label :=\n      label 'last.label :=\n    }\n  if$\n  number.label #1 + 'number.label :=\n}\nFUNCTION {reverse.pass}\n{ next.extra \"b\" =\n    { \"a\" 'extra.label := }\n    'skip$\n  if$\n  extra.label 'next.extra :=\n  extra.label\n  duplicate$ empty$\n    'skip$\n    { year field.or.null #-1 #1 substring$ chr.to.int$ #65 < \n      { \"{\\natexlab{\" swap$ * \"}}\" * }\n      { \"{(\\natexlab{\" swap$ * \"})}\" * }\n    if$ }\n  if$\n  'extra.label :=\n  label extra.label * 'label :=\n}\nEXECUTE {initialize.extra.label.stuff}\nITERATE {forward.pass}\nREVERSE {reverse.pass}\nFUNCTION {bib.sort.order}\n{ sort.label\n  \"    \"\n  *\n  year field.or.null sortify\n  *\n  \"    \"\n  *\n  title field.or.null\n  sort.format.title\n  *\n  #1 entry.max$ substring$\n  'sort.key$ :=\n}\nITERATE {bib.sort.order}\nSORT\nFUNCTION {begin.bib}\n{ preamble$ empty$\n    'skip$\n    { preamble$ write$ newline$ }\n  if$\n  \"\\begin{thebibliography}{\" number.label int.to.str$ * \"}\" *\n  write$ newline$\n  \"\\expandafter\\ifx\\csname natexlab\\endcsname\\relax\\def\\natexlab#1{#1}\\fi\"\n  write$ newline$\n}\nEXECUTE {begin.bib}\nEXECUTE {init.state.consts}\nITERATE {call.type$}\nFUNCTION {end.bib}\n{ newline$\n  \"\\end{thebibliography}\" write$ newline$\n}\nEXECUTE {end.bib}\n%% End of customized bst file\n%%\n%% End of file `compling.bst'.\n"
  },
  {
    "path": "chapters/coqa/dataset.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{\\sys{CoQA}: A Conversational QA Challenge}\n\\label{sec:coqa-dataset}\n\nIn this section, we introduce \\sys{CoQA}, a novel dataset for building \\tf{Co}nversational \\tf{Q}uestion \\tf{A}nswering systems. We develop \\sys{CoQA} with three main goals in mind. The first concerns the nature of questions in a human conversation. As an example seen in Figure~\\ref{fig:coqa-example}, in this conversation, every question after the first is dependent on the conversation history. At present, there are no large scale reading comprehension datasets which contain questions that depend on a conversation history and this is what \\sys{CoQA} is mainly developed for.\\footnote{Concurrent with our work, \\newcite{choi2018quac} also created a conversational dataset with a similar goal, but it differs in many key design decisions. We will discuss it in Section~\\ref{sec:coqa-future}.}\n\nThe second goal of \\sys{CoQA} is to ensure the naturalness of answers in a conversation. As we discussed in the earlier chapters, most existing reading comprehension datasets either restrict answers to a contiguous span in a given passage, or allow free-form answers with a low human agreement (e.g., \\sys{NarrativeQA}). Our desiderata are 1) the answers should not be only span-based so that anything can be asked and the conversation can flow naturally. For example, there is no extractive answer for $Q_4$ \\ti{How many?} in Figure~\\ref{fig:coqa-example}. 2) It still supports reliable automatic evaluation with a a strong human performance. Therefore, we propose that the answers can be free-form text (abstractive answers), while the extractive spans act as rationales for the actual answers. Therefore, the answer for $Q_4$ is simply \\ti{Three} while its rationale is spanned across multiple sentences.\n\nThe third goal of \\sys{CoQA} is to enable building QA systems that perform robustly across domains. 
The current reading comprehension datasets mainly focus on a single domain, which makes it hard to test the generalization ability of existing models. Hence we collect our dataset from seven different domains --- children's stories, literature, middle and high school English exams, news, Wikipedia, science articles and Reddit. The last two are used for out-of-domain evaluation.\n\n\\subsection{Task Definition}\n\\label{sec:coqa-task}\n\n\\begin{figure}[!t]\n\\begin{tabular}{p{\\columnwidth}}\n\\toprule\nThe Virginia governor's race, billed as the marquee battle of an otherwise anticlimactic 2013 election cycle, is shaping up to be a foregone conclusion. Democrat Terry McAuliffe, the longtime political fixer and moneyman, hasn't trailed in a poll since May. Barring a political miracle, Republican Ken Cuccinelli will be delivering a concession speech on Tuesday evening in Richmond. In recent ...\\\\\n\\\\\n$Q_1$:               What are the candidates {\\bf \\color{magenta} running} for?\\\\\n$A_1$:               Governor\\\\\n$R_1$: The Virginia governor's race\\\\\n\\vspace{0em}\n$Q_2$:               {\\bf \\color{magenta} Where}?\\\\\n$A_2$:               Virginia \\\\\n$R_2$: The Virginia governor's race\\\\\n\\vspace{0em}\n$Q_3$:               Who is the democratic candidate?\\\\\n\\vspace{-0.6em}{\\bf \\color{blue} A$_3$}:               {\\bf \\color{orange} Terry McAuliffe} \\\\\n$R_3$: Democrat Terry McAuliffe\\\\\n\\vspace{0em}\n$Q_4$:               Who is {\\bf \\color{orange} his} opponent?\\\\\n\\vspace{-0.6em}{\\bf \\color{blue} A$_4$}:               {\\bf \\color{red} Ken Cuccinelli} \\\\\n$R_4$: Republican Ken Cuccinelli\\\\\n\\vspace{0em}\n$Q_5$:               What party does {\\bf \\color{red} he} belong to?\\\\\n$A_5$:               Republican \\\\\n$R_5$: Republican Ken Cuccinelli\\\\\n\\vspace{0em}\n$Q_6$:               Which of {\\bf \\color{blue} them} is winning?\\\\\n$A_6$:               Terry McAuliffe \\\\\n$R_6$: Democrat Terry McAuliffe, the 
longtime political fixer and moneyman, hasn't trailed in a poll since May\\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{Another example in \\sys{CoQA} with entity of focus changes}{\\label{fig:coqa-example2}A conversation showing coreference chains in colors. The entity of focus changes in $Q_4$, $Q_5$, $Q_6$.}\n\\end{figure}\n\nWe first define the task formally. Given a passage $P$, a conversation consists of $n$ turns, and each turn consists of $(Q_i, A_i, R_i), i = 1, \\ldots, n$, where $Q_i$ and $A_i$ denote the question and the answer in the $i$-th turn, and $R_i$ is the rationale, which supports the answer $A_i$ and must be a single span of the passage. The task is to answer the next question $Q_i$ given the conversation so far: $Q_1, A_1, \\ldots, Q_{i-1}, A_{i-1}$. It is worth noting that we collect $R_i$ with the hope that they can help us understand how answers are derived and improve the training of our models, while \\ti{they are not provided during evaluation}.\n\nFor the example in Figure~\\ref{fig:coqa-example2}, the conversation begins with question $Q_1$. We answer $Q_1$ with $A_1$ based on the evidence $R_1$ from the passage. In this example, the answerer wrote only \\ti{Governor} as the answer but selected a longer rationale \\ti{The Virginia governor's race}. When we come to $Q_2$ \\ti{Where?}, we must refer back to the conversation history since otherwise its answer could be \\ti{Virginia} or \\ti{Richmond} or something else. In our task, conversation history is indispensable for answering many questions. We use conversation history $Q_1$ and $A_1$ to answer $Q_2$ with $A_2$ based on the evidence $R_2$. For an unanswerable question, we give \\ti{unknown} as the final answer and do not highlight any rationale.\n\nIn this example, we observe that the entity of focus changes as the conversation progresses. The questioner uses \\ti{his} to refer to \\ti{Terry} in $Q_4$ and \\ti{he} to \\ti{Ken} in $Q_5$. 
If these are not resolved correctly, we end up with incorrect answers. The conversational nature of questions requires us to reason from multiple sentences (the current question and the previous questions or answers, and sentences from the passage). It is common that a single question may require a rationale that spans multiple sentences (e.g., $Q_1$, $Q_4$ and $Q_5$ in Figure~\\ref{fig:coqa-example}). We describe additional question and answer types in Section~\\ref{sec:coqa-data-analysis}.\n\n\n\\subsection{Dataset Collection}\nWe detail our dataset collection process as follows. For each conversation, we employ two annotators, a questioner and an answerer. This setup has several advantages over using a single annotator to act both as a questioner and an answerer:\n1) when two annotators chat about a passage, their dialogue flow is natural compared to chatting with oneself; 2) when one annotator responds with a vague question or an incorrect answer, the other can raise a flag, which we use to identify bad workers; and 3) the two annotators can discuss guidelines (through a separate chat window) when they have disagreements. These measures help to prevent spam and to obtain high-agreement data.\\footnote{Due to AMT terms of service, we allowed a single worker to act both as a questioner and an answerer after a minute of waiting. 
This constitutes around 12\\% of the data.}\n\n\\begin{figure}[!t]\n  \\center\n  \\includegraphics[scale=0.18]{img/coqa_questioner.png}\n  \\longcaption{The questioner interface of \\sys{CoQA}}{\\label{fig:coqa-questioner}The questioner interface of our \\sys{CoQA} dataset.}\n\\end{figure}\n\n\\begin{figure}[!t]\n  \\center\n  \\includegraphics[scale=0.18]{img/coqa_answerer.png}\n  \\longcaption{The answerer interface of \\sys{CoQA}}{\\label{fig:coqa-answerer}The answerer interface of our \\sys{CoQA} dataset.}\n\\end{figure}\n\nWe use Amazon Mechanical Turk (AMT) to pair workers on a passage, for which we use the ParlAI MTurk API \\cite{miller2017parlai}. On average, each passage costs 3.6 USD for conversation collection and another 4.5 USD for collecting three additional answers for development and test data.\n\n\n\\paragraph{Collection interface.} We have different interfaces for a questioner and an answerer (Figure~\\ref{fig:coqa-questioner} and Figure~\\ref{fig:coqa-answerer}). A questioner's role is to ask questions, and an answerer's role is to answer questions in addition to highlighting rationales. We want questioners to avoid using exact words in the passage in order to increase lexical diversity. When they type a word that is already present in the passage, we alert them to paraphrase the question if possible. For the answers, we want answerers to stick to the vocabulary in the passage in order to limit the number of possible answers. We encourage this by automatically copying the highlighted text into the answer box and allowing them to edit copied text in order to generate a natural answer. 
We found that 78\\% of the answers have at least one edit, such as changing a word's case or adding punctuation.\n\n\\paragraph{Passage selection.} We select passages from seven diverse domains: children's stories from MCTest \\cite{richardson2013mctest}, literature from Project Gutenberg\\footnote{Project Gutenberg \\url{https://www.gutenberg.org}}, middle and high school English exams from RACE \\cite{lai2017race}, news articles from CNN \\cite{hermann2015teaching}, articles from Wikipedia, science articles from AI2 Science Questions \\cite{welbl2017crowdsourcing} and Reddit articles from the Writing Prompts dataset \\cite{fan2018hierarchical}.\n\nNot all passages in these domains are equally good for generating interesting conversations.\nA passage with just one entity often results in questions that entirely focus on that entity.\nWe select passages with multiple entities, events and pronominal references using Stanford \\sys{CoreNLP} \\cite{manning2014stanford}. We truncate long articles to the first few paragraphs that result in around 200 words.\n\nTable~\\ref{tab:coqa-domains} shows the distribution of domains.\nWe reserve the Science and Reddit domains for out-of-domain evaluation. For each in-domain dataset, we split the data such that there are 100 passages in the development set, 100 passages in the test set, and the rest in the training set. 
In contrast, for each out-of-domain dataset, we just have 100 passages in the test set without any passages in the training or the development sets.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrrrr}\n\\toprule\n\\tf{Domain} &  \\tf{\\# Passages} &  \\tf{\\# Q/A} & \\tf{Passage}  &  \\tf{\\# Turns per} \\\\\n & & \\tf{pairs} & \\tf{length} & \\tf{passage} \\\\\n\\midrule\nChildren's Stories  & 750 & 10.5k & 211 &  14.0 \\\\\nLiterature  & 1,815 & 25.5k & 284  & 15.6 \\\\\nMid/High School Exams & 1,911 & 28.6k & 306  & 15.0 \\\\\nNews & 1,902 & 28.7k & 268 &  15.1 \\\\\nWikipedia & 1,821 & 28.0k & 245  & 15.4 \\\\\n\\midrule\n\\multicolumn{5}{c}{Out of domain} \\\\\n\\midrule\nScience & 100 & 1.5k & 251  & 15.3\\\\\nReddit & 100 & 1.7k & 361 & 16.6 \\\\\n\\midrule\nTotal & 8,399 & 127k  & 271 & 15.2 \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{Distribution of domains in \\sys{CoQA}.}{\\label{tab:coqa-domains} Distribution of domains in \\sys{CoQA}.}\n\\end{table}\n\n\\paragraph{Collecting multiple answers.} Some questions in \\sys{CoQA} may have multiple valid answers. For example, another answer for $Q_4$ in Figure~\\ref{fig:coqa-example2} is \\ti{A Republican candidate}. In order to account for answer variations, we collect three additional answers for all questions in the development and test data. Since our data is conversational, questions influence answers, which in turn influence the follow-up questions. In the previous example, if the original answer was \\ti{A Republican Candidate}, then the following question \\ti{Which party does he belong to?} would not have occurred in the first place. When we show questions from an existing conversation to new answerers, it is likely they will deviate from the original answers, which makes the conversation incoherent. 
It is thus important to bring them to a common ground with the original answer.\n\nWe achieve this by turning the answer collection task into a game of predicting original answers.\nFirst, we show a question to a new answerer, and when she answers it, we show the original answer and ask her to verify whether her answer matches the original.\nFor the next question, we ask her to guess the original answer and verify again.\nWe repeat this process until the conversation is complete.\nIn our pilot experiment, the human F1 score increased by 5.4\\% when we used this verification setup.\n\n\n\\subsection{Dataset Analysis}\n\\label{sec:coqa-data-analysis}\n\nWhat makes the \\sys{CoQA} dataset conversational compared to existing reading comprehension datasets like \\sys{SQuAD}? How does the conversation flow from one turn to the other? What linguistic phenomena do the questions in \\sys{CoQA} exhibit? We answer these questions below.\n\n\\paragraph{Comparison with \\sys{SQuAD 2.0}.}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[height=8cm]{img/coqa_squad_comparison.pdf}\n\\end{center}\n\\longcaption{A comparison of questions in \\sys{CoQA} and \\sys{SQuAD 2.0}}{\\label{fig:coqa-squad-comparison} Distribution of trigram prefixes of questions in \\sys{SQuAD 2.0} and \\sys{CoQA}.}\n\\end{figure}\n\nIn the following, we perform an in-depth comparison of \\sys{CoQA} and \\sys{SQuAD 2.0}~\\cite{rajpurkar2018know}. Figure~\\ref{fig:coqa-squad-comparison} shows the distribution of frequent trigram prefixes. While coreferences are non-existent in \\sys{SQuAD 2.0}, almost every sector of \\sys{CoQA} contains coreferences (\\ti{he, him, she, it, they}), indicating that \\sys{CoQA} is highly conversational. 
Because of the free-form nature of answers, we expect a richer variety of questions in \\sys{CoQA} than in \\sys{SQuAD 2.0}.\nWhile nearly half of the questions in \\sys{SQuAD 2.0} are \\ti{what} questions, the distribution of \\sys{CoQA} is spread across multiple question types. Several sectors indicated by prefixes \\ti{did, was, is, does, and} are frequent in \\sys{CoQA} but are completely absent in \\sys{SQuAD 2.0}.\n\nSince a conversation is spread over multiple turns, we expect conversational questions and answers to be shorter than in a standalone interaction. In fact, questions in \\sys{CoQA} can be made up of just one or two words (\\ti{who?}, \\ti{when?}, \\ti{why?}).\nAs seen in Table~\\ref{tab:squad-coqa-length}, on average, a question in \\sys{CoQA} is only 5.5 words long while it is 10.1 for \\sys{SQuAD 2.0}. The answers are also usually shorter in \\sys{CoQA} than in \\sys{SQuAD 2.0}.\n\nTable~\\ref{tab:squad-coqa-answers} provides insights into the types of answers in \\sys{SQuAD 2.0} and \\sys{CoQA}.\nWhile the original version of \\sys{SQuAD} \\cite{rajpurkar2016squad} does not have any unanswerable questions, \\sys{SQuAD 2.0} \\cite{rajpurkar2018know} focuses solely on obtaining them, resulting in a higher frequency than in \\sys{CoQA}. \\sys{SQuAD 2.0} has 100\\% extractive answers by design, whereas in \\sys{CoQA}, 66.8\\% of answers can be classified as extractive after ignoring punctuation and case mismatches.\\footnote{If punctuation and case are not ignored, only 37\\% of the answers are extractive.}\nThis is higher than we anticipated. Our conjecture is that human factors such as wage may have influenced workers to ask questions that elicit faster responses by selecting text. It is worth noting that \\sys{CoQA} has 11.1\\% and 8.7\\% questions with \\ti{yes} or \\ti{no} as answers whereas \\sys{SQuAD 2.0} has 0\\%. 
Both datasets have a high number of named entities and noun phrases as answers.\n\n\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{p{3cm} r r}\n\\toprule\n & \\bf \\sys{SQuAD 2.0}  & \\bf \\sys{CoQA} \\\\\n\\midrule\nPassage Length & 117 & 271 \\\\\nQuestion Length & 10.1 & 5.5 \\\\\nAnswer Length & 3.2 & 2.7 \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{Data statistics in \\sys{SQuAD 2.0} and \\sys{CoQA}}{\\label{tab:squad-coqa-length} Average number of words in passage, question and answer in \\sys{SQuAD 2.0} and \\sys{CoQA}.}\n\\end{table}\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{p{3.5cm} r r}\n\\toprule\n& \\bf \\sys{SQuAD 2.0}   & \\bf \\sys{CoQA}  \\\\\n\\midrule\nAnswerable & 66.7\\% & 98.7\\% \\\\\nUnanswerable & 33.3\\% & 1.3\\% \\\\\n\\midrule\nExtractive & 100.0\\% & 66.8\\% \\\\\nAbstractive & 0.0\\% & 33.2\\% \\\\\n\\midrule\nNamed Entity & 35.9\\% & 28.7\\% \\\\\nNoun Phrase & 25.0\\% & 19.6\\% \\\\\nYes & 0.0\\% & 11.1\\% \\\\\nNo & 0.1\\% & 8.7\\% \\\\\nNumber & 16.5\\% & 9.8\\% \\\\\nDate/Time & 7.1\\% & 3.9\\% \\\\\nOther & 15.5\\% & 18.1\\% \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{Distribution of answer types in \\sys{SQuAD 2.0} and \\sys{CoQA}}{\\label{tab:squad-coqa-answers} Distribution of answer types in \\sys{SQuAD 2.0} and \\sys{CoQA}.}\n\\end{table}\n\n\\paragraph{Conversation flow.}\nA coherent conversation must have smooth transitions between turns.\nWe expect the narrative structure of the passage to influence our conversation flow.\nWe split the passage into 10 uniform chunks, and identify the chunk of interest for each turn and the transitions between turns based on rationale spans.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[height=9cm]{img/coqa_conversation_flow.pdf}\n\\end{center}\n\\longcaption{Conversation Flow in \\sys{CoQA}}{\\label{fig:coqa-conversation-flow} Chunks of interest as a conversation progresses. 
The x-axis indicates the turn number and\nthe y-axis indicates the passage chunk containing the rationale. The height of a chunk indicates the concentration of conversation in that chunk. The width of the bands is proportional to the frequency of transition between chunks from one turn to the other.}\n\\end{figure}\n\n\nFigure~\\ref{fig:coqa-conversation-flow} portrays the conversation flow of the top 10 turns.\nThe starting turns tend to focus on the first few chunks and as the conversation advances, the focus shifts to the later chunks. Moreover, the turn transitions are smooth, with the focus often remaining in the same chunk or moving to a neighbouring chunk. Most frequent transitions happen to the first and the last chunks, and likewise these chunks have diverse outward transitions.\n\n\\paragraph{Linguistic phenomena.}\n\n\\begin{table}[!t]\n\\centering\n\\small\n\\begin{tabular}{lp{7cm}c}\n\\toprule\n\\bf Phenomenon & \\bf Example & \\bf Percentage \\\\\n\\midrule\n\\multicolumn{3}{c}{Relationship between a question and its passage} \\\\\n\\midrule\nLexical match & Q: Who had to rescue her?& 29.8\\% \\\\\n& A: the coast guard \\\\\n& R: Outen was rescued by the coast guard \\\\\nParaphrasing & Q: Did the wild dog approach? & 43.0\\% \\\\\n& A: Yes \\\\\n& R: he drew cautiously closer \\\\\nPragmatics &  Q:               Is Joey a male or female?  &  27.2\\% \\\\\n & A:  Male \\\\\n& R: it looked like a stick man so she kept \\textbf{him}. She named her new noodle friend Joey \\\\\n\\midrule\n\\multicolumn{3}{c}{Relationship between a question and its conversation history} \\\\\n\\midrule\nNo coreference & Q: What is IFL? & 30.5\\% \\\\\nExplicit coreference & Q: Who had Bashti forgotten? & 49.7\\% \\\\\n& A: the puppy \\\\\n& Q: What was \\textbf{his} name? \\\\\nImplicit coreference & Q: When will Sirisena be sworn in? 
& 19.8\\% \\\\\n& A: 6 p.m. local time  \\\\\n& Q: \\textbf{Where}?\\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{Linguistic phenomena in \\sys{CoQA} questions}{\\label{tab:ling-phenomena}Linguistic phenomena in \\sys{CoQA} questions.}\n\\end{table}\nWe further analyze the questions for their relationship with the passages and the conversation history. We sample 150 questions in the development set and annotate various phenomena as shown in Table~\\ref{tab:ling-phenomena}.\n\nIf a question contains at least one content word that appears in the passage, we classify it as \\ti{lexical match}. These comprise around 29.8\\% of the questions. If it has no lexical match but is a paraphrase of the rationale, we classify it as \\ti{paraphrasing}. These questions contain phenomena such as synonymy, antonymy, hypernymy, hyponymy and negation.\nThese constitute a large portion of questions, around 43.0\\%. The rest, 27.2\\%, have no lexical cues, and we classify them under \\ti{pragmatics}. These include phenomena like common sense and presupposition. For example, the question \\ti{Was he loud and boisterous?} is not a direct paraphrase of the rationale \\ti{he dropped his feet with the lithe softness of a cat} but the rationale combined with world knowledge can answer this question.\n\nFor the relationship between a question and its conversation history, we classify questions according to whether they depend on the conversation history and, if they do, whether they contain an explicit coreference marker.\n\nAs a result, around 30.5\\% of questions do not rely on coreference with the conversational history and are answerable on their own. Almost half of the questions (49.7\\%) contain explicit coreference markers such as \\ti{he, she, it}. These either refer to an entity or an event introduced in the conversation.\nThe remaining 19.8\\% do not have explicit coreference markers but refer to an entity or event implicitly.\n"
  },
  {
    "path": "chapters/coqa/discussions.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Discussion}\n\\label{sec:coqa-future}\n\nSo far, we have discussed the \\sys{CoQA} dataset and several competitive baselines based on conversational models and reading comprehension models. We hope that our efforts can serve as a first step toward building conversational QA agents.\n\nOn the one hand, we think there is ample room for further improving performance on \\sys{CoQA}: our hybrid system obtains an F1 score of 65.1\\%, which is still 23.7 points behind the human performance (88.8\\%). We encourage our research community to work on this dataset and push the limits of conversational question answering models. We think there are several directions for further improvement:\n\n\\begin{itemize}\n    \\item\n        All the baseline models we built only use the conversation history by simply concatenating the previous questions and answers with the current question. We think that there should be better ways to connect the history and the current question. For the questions in Table~\\ref{tab:ling-phenomena}, we should build models to actually understand that \\ti{his} in the question \\ti{What was his name?} refers to \\ti{the puppy}, and the question \\ti{Where?} means \\ti{Where will Sirisena be sworn in?}. Indeed, a recent model \\sys{FlowQA}~\\cite{huang2018flowqa} proposed a solution to effectively stack single-turn models along the conversational flow and demonstrated state-of-the-art performance on \\sys{CoQA}.\n    \\item\n        Our hybrid model aims to combine the advantages of the span-prediction reading comprehension models and the pointer-generator network model to address the nature of abstractive answers. However, we implemented it as a pipeline model, so the performance of the second component depends on whether the reading comprehension model can extract the right piece of evidence from the passage. 
We think that it is desirable to build an end-to-end model which can extract rationales while also rewriting the rationale into the final answer.\n    \\item\n        We think the rationales that we collected can be better leveraged in training models.\n\\end{itemize}\n\nOn the other hand, \\sys{CoQA} certainly has its limitations and we should explore more challenging and more useful datasets in the future. One clear limitation is that the conversations in \\sys{CoQA} are only turns of question and answer pairs. That means the answerer is only responsible for answering questions and cannot ask any clarification questions or otherwise communicate with the questioner. Another problem is that \\sys{CoQA} has very few (1.3\\%) unanswerable questions, which we think are crucial in practical conversational QA systems.\n\n\nIn parallel to our work, \\newcite{choi2018quac} also created a dataset of conversations in the form of questions and answers on text passages. In our interface, we show a passage to both the questioner and the answerer, whereas their interface only shows a title to the questioner and the full passage to the answerer. Since their setup encourages the answerer to reveal more information for the following questions, their answers are as long as 15.1 words on average (ours is 2.7). While the human performance on our test set is 88.8 F1, theirs is 74.6 F1. Moreover, while \\sys{CoQA}'s answers can be abstractive, their answers are restricted to only extractive text spans. Our dataset contains passages from seven diverse domains, whereas their dataset is built only from Wikipedia articles about people. Also, concurrently, \\newcite{saeidi2018interpretation} created a conversational QA dataset for regulatory text such as tax and visa regulations. Their answers are limited to \\textit{yes} or \\textit{no}, though with the positive characteristic of permitting clarification questions when a given question cannot be answered.\n"
  },
  {
    "path": "chapters/coqa/experiments.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Experiments}\n\\label{sec:coqa-experiments}\n\n\\subsection{Setup}\nFor the \\sys{seq2seq} and \\sys{PGNet} experiments, we use the \\sys{OpenNMT} toolkit \\cite{klein2017opennmt}.\nFor the reading comprehension experiments, we use the same implementation that we used for \\sys{SQuAD}~\\cite{chen2017reading}.\nWe tune the hyperparameters on the development data: the number of turns to use from the conversation history, the number of layers, the number of hidden units per layer and the dropout rate.\nWe initialize the word projection matrix with \\sys{GloVe} \\cite{pennington2014glove} for conversational models and \\sys{fastText} \\cite{bojanowski2017enriching} for reading comprehension models, based on empirical performance. We update the projection matrix during training in order to learn embeddings for delimiters such as $\\mathrm{<}q\\mathrm{>}$.\n\nFor all the \\sys{seq2seq} and \\sys{PGNet} experiments, we use the default settings of \\sys{OpenNMT}: 2 layers of LSTMs with $500$ hidden units for both the encoder and the decoder. The models are optimized using SGD, with an initial learning rate of $1.0$ and a decay rate of $0.5$. A dropout rate of $0.3$ is applied to all layers.\n\nFor all the reading comprehension experiments, the best configuration we find is 3 layers of LSTMs with $300$ hidden units for each layer. A dropout rate of $0.4$ is applied to all LSTM layers and a dropout rate of $0.5$ is applied to word embeddings.\n\n\\subsection{Experimental Results}\nTable~\\ref{tab:coqa-results} presents the results of the models on the development and the test data. Considering the results on the test set, the \\sys{seq2seq} model performs the worst, generating frequently occurring answers irrespective of whether these answers appear in the passage or not, a well-known behavior of conversational models \\cite{li2016diversity}. 
\\sys{PGNet} alleviates the frequent response problem by focusing on the vocabulary in the passage and it outperforms \\sys{seq2seq} by 17.8 points. However, it still lags behind \\sys{Stanford Attentive Reader} by 8.5 points.\nA reason could be that \\sys{PGNet} has to memorize the whole passage before answering a question, a huge overhead which \\sys{Stanford Attentive Reader} avoids. But \\sys{Stanford Attentive Reader} fails miserably in answering questions with free-form answers (see row \\textit{Abstractive} in Table~\\ref{tab:error-analysis}).\nWhen the output of \\sys{Stanford Attentive Reader} is fed into \\sys{PGNet}, we empower both \\sys{Stanford Attentive Reader} and \\sys{PGNet} --- \\sys{Stanford Attentive Reader} in producing free-form answers; \\sys{PGNet} in focusing on the rationale instead of the passage. This combination outperforms the \\sys{PGNet} and the \\sys{Stanford Attentive Reader} models by 21.0 and 12.5 points respectively.\n\n\\begin{table}\n\\small\n\\centering\n\\begin{tabular}{l | c c c c c | c c |  c}\n\\hline\n&  \\multicolumn{5}{c|}{\\tf{In-domain}} & \\multicolumn{2}{c|}{\\tf{Out-of-domain}} & \\tf{Overall} \\\\\n&  Children & Literature & Exam & News & Wikipedia & Reddit & Science &  \\\\\n\\hline\n\\multicolumn{9}{c}{\\tf{Development data}}\\\\\n\\hline\n\\sys{seq2seq} & 30.6 & 26.7 & 28.3 & 26.3 & 26.1 & N/A & N/A & 27.5 \\\\\n\\sys{PGNet} & 49.7 & 42.4 & 44.8 & 45.5 & 45.0 & N/A & N/A & 45.4 \\\\\n\\sys{SAR} & 52.4 & 52.6 & 51.4 & 56.8 & 60.3 & N/A & N/A & 54.7 \\\\\n\\sys{Hybrid} & \\bf 64.5 & \\bf 62.0 & \\bf 63.8 & \\bf 68.0 & \\bf 72.6 & N/A & N/A & \\bf 66.2 \\\\\n\\sys{Human} & 90.7 & 88.3 & 89.1 & 89.9 & 90.9 & N/A & N/A  & 89.8 \\\\\n\\hline\n\\multicolumn{9}{c}{\\tf{Test data}}\\\\\n\\hline\n\\sys{seq2seq} & 32.8 & 25.6 & 28.0 & 27.0 & 25.3 & 25.6 & 20.1  & 26.3 \\\\\n\\sys{PGNet} & 49.0 & 43.3 & 47.5 & 47.5 & 45.1 & 38.6 & 38.1  & 44.1 \\\\\n\\sys{SAR} & 46.7 & 53.9 & 54.1 & 57.8 & 59.4 & 45.0 & 51.0 & 52.6 
\\\\\n\\sys{Hybrid} & \\bf 64.2 & \\bf 63.7 & \\bf  67.1 & \\bf 68.3 & \\bf 71.4 & \\bf 57.8 & \\bf 63.1  & \\bf 65.1  \\\\\n\\sys{Human} & 90.2 & 88.4 & 89.8 & 88.6 & 89.9 & 86.7 & 88.1 & 88.8 \\\\\n\\hline\n\\end{tabular}\n\\longcaption{Models and human performance on \\sys{CoQA}}{\\label{tab:coqa-results}Models and human performance (F1 score) on the development and the test data. \\sys{SAR}: \\sys{Stanford Attentive Reader}.}\n\\end{table}\n\n\\paragraph{Models vs. Humans.}\nThe human performance on the test data is 88.8 F1, a strong agreement indicating that \\sys{CoQA}'s questions have concrete answers.\nOur best model is 23.7 points behind humans, suggesting that the task is difficult to accomplish with current models.\nWe anticipate that using a state-of-the-art reading comprehension model \\cite{devlin2018bert} may improve the results by a few points.\n\n\\paragraph{In-domain~vs.~Out-of-domain.}\nAll models perform worse on out-of-domain datasets compared to in-domain datasets. The best model drops by 6.6 points. For in-domain results, both the best model and humans find the literature domain harder than the others since literature's vocabulary requires proficiency in English. For out-of-domain results, the Reddit domain is apparently harder. 
This could be because Reddit requires reasoning on longer passages (see Table~\\ref{tab:coqa-domains}).\n\nWhile humans achieve high performance on children's stories, models perform poorly, probably due to there being fewer training examples in this domain compared to the others.\\footnote{We collect children's stories from MCTest, which contains only 660 passages in total, of which we use 200 stories for development and test.}\nBoth humans and models find Wikipedia easy.\n\n\\subsection{Error Analysis}\n\n\\begin{table}[!t]\n\\centering\n\\begin{tabular}{p{4cm}ccccc}\n\\toprule\n\\tf{Type} & \\sys{seq2seq} & \\sys{PGNet} & \\sys{SAR} & \\sys{Hybrid} & \\sys{Human}\\\\\n\\midrule\n\\multicolumn{6}{c}{\\tf{Answer Type}} \\\\\n\\midrule\nAnswerable & 27.5 & 45.4 & 54.7 & 66.3 & 89.9 \\\\\nUnanswerable & 33.9 & 38.2 & 55.0 & 51.2 & 72.3 \\\\\n\\midrule\nExtractive & 20.2 & 43.6 & 69.8 & 70.5 & 91.1 \\\\\nAbstractive & 43.1 & 49.0 & 22.7 & 57.0 & 86.8 \\\\\n\\midrule\nNamed Entity & 21.9 & 43.0 & 72.6 & 72.2 & 92.2 \\\\\nNoun Phrase & 17.2 & 37.2 & 64.9 & 64.1 & 88.6 \\\\\nYes & 69.6 & 69.9 & 7.9\\; & 72.7 & 95.6 \\\\\nNo & 60.2 & 60.3 & 18.4 & 58.7 & 95.7 \\\\\nNumber & 15.0 & 48.6 & 66.3 & 71.7 & 91.2 \\\\\nDate/Time & 13.7\\; & 50.2 & 79.0 & 79.1 & 91.5 \\\\\nOther & 14.1 & 33.7 & 53.5 & 55.2 & 80.8 \\\\\n\\midrule\n\\multicolumn{6}{c}{\\tf{Question Type}} \\\\\n\\midrule\nLexical Matching & 20.7 &  40.7 & 57.2 & 65.7 & 91.7 \\\\\nParaphrasing &  23.7 & 33.9 & 46.9 & 64.4 & 88.8 \\\\\nPragmatics  & 33.9 & 43.1 & 57.4 & 60.6 & 84.2 \\\\\n\\midrule\nNo coreference & 16.1  & 31.7 & 54.3 & 57.9 & 90.3  \\\\\nExplicit coreference & 30.4 & 42.3 & 49.0 & 66.3 & 87.1 \\\\\nImplicit coreference & 31.4 & 39.0 & 60.1 & 66.4 & 88.7 \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{Error analysis on \\sys{CoQA}}{\\label{tab:error-analysis} Fine-grained results of different question and answer types in the development set. 
For the question type results, we only analyze 150 questions as described in Section~\\ref{sec:coqa-data-analysis}.}\n\\end{table}\n\nTable~\\ref{tab:error-analysis} presents fine-grained results of models and humans on the development set. We observe that humans have the highest disagreement on the unanswerable questions.\nSometimes, people guess an answer even when it is not present in the passage, e.g., one can guess the age of \\textit{Annie} in Figure~\\ref{fig:coqa-example} based on her \\textit{grandmother}'s age.\nThe human agreement on abstractive answers is lower than on extractive answers.\nThis is expected because our evaluation metric is based on word overlap rather than on the meaning of words.\nFor the question \\textit{did Jenny like her new room?},  human answers \\textit{she loved it} and \\textit{yes} are both accepted.\n\nFinding the perfect evaluation metric for abstractive responses is still a challenging problem \\cite{liu2016not} and beyond the scope of our work.\nFor our models' performance, \\sys{seq2seq} and \\sys{PGNet} perform well on the questions with abstractive answers, and \\sys{Stanford Attentive Reader} performs well on the questions with extractive answers, due to their respective designs.\nThe combined model improves on both categories.\n\nAmong the lexical question types, humans find the questions with lexical matches the easiest followed by paraphrasing, and the questions with pragmatics the hardest --- this is expected since questions with lexical matches and paraphrasing share some similarity with the passage, thus making them relatively easier to answer than pragmatic questions.\nThe best model also follows the same trend.\nWhile humans find the questions without coreferences easier than those with coreferences (explicit or implicit), the models behave sporadically.\nIt is not clear why humans find implicit coreferences easier than explicit coreferences.\nA conjecture is that implicit coreferences depend directly on the 
previous turn whereas explicit coreferences may have long-distance dependencies on the conversation.\n\n\\paragraph{Importance of conversation history.}\nFinally, we examine how important the conversation history is for the dataset. Table~\\ref{tab:ablations} presents the results with a varied number of previous turns used as conversation history.\nAll models succeed at leveraging history but only up to a history of one previous turn (except \\sys{PGNet}). It is surprising that using more turns could decrease the performance.\n\nWe also perform an experiment on humans to measure the trade-off between their performance and the number of previous turns shown.\nBased on the heuristic that short questions likely depend on the conversation history, we sample 300 one- or two-word questions, and collect answers to these while varying the number of previous turns shown.\n\nWhen we do not show any history, human performance drops to 19.9 F1 as opposed to 86.4 F1 when full history is shown. When the previous question and answer are shown, their performance jumps to 79.8 F1, suggesting that the previous turn plays an important role in making sense of the current question. If the last two questions and answers are shown, they reach 85.3 F1, close to the performance when the full history is shown. This suggests that most questions in a conversation have a limited dependency within a bound of two turns.\n\n\\begin{table}[!t]\n\\centering\n\\begin{tabular}{ccccc}\n\\toprule\n\\tf{history size} & \\sys{seq2seq} & \\sys{PGNet} & \\sys{SAR} & \\sys{Hybrid} \\\\\n\\midrule\n0 & 24.0 & 41.3 & 50.4 & 61.5 \\\\\n1 & 27.5 & 43.9 & 54.7 &  66.2 \\\\\n2 & 21.4 & 44.6 & 54.6 & 66.0 \\\\\nall & 21.0 &  45.4 & 52.3 & 64.3 \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{\\sys{CoQA} results on the development set with different history sizes}{\\label{tab:ablations} Results on the development set with different history sizes. 
History size indicates the number of previous turns prepended to the current question. Each turn contains a question and its answer. \\sys{SAR}: \\sys{Stanford Attentive Reader}. }\n\\end{table}\n"
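The history-size ablation above can be made concrete with a small sketch of how the model input is assembled from the last $n$ turns, using the $\mathrm{<}q\mathrm{>}$ and $\mathrm{<}a\mathrm{>}$ delimiters described in Section~\ref{sec:coqa-models}. This is illustrative only, not our exact preprocessing code:

```python
def build_input(passage, history, question, history_size):
    """Concatenate the passage, the last `history_size` (question, answer)
    turns, and the current question, delimited by <q> and <a>.
    `history_size=None` means use the full history."""
    if history_size is None:
        turns = history
    elif history_size == 0:
        turns = []
    else:
        turns = history[-history_size:]
    parts = [passage]
    for q, a in turns:
        parts += ["<q>", q, "<a>", a]
    parts += ["<q>", question]
    return " ".join(parts)

# With history size 1, only the immediately preceding turn is kept:
assert build_input("P", [("q1", "a1"), ("q2", "a2")], "q3", 1) == "P <q> q2 <a> a2 <q> q3"
```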
  },
  {
    "path": "chapters/coqa/intro.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n% \\section{Introduction}\n\nIn the last chapter, we discussed how we built a general-knowledge question-answering system from neural reading comprehension. However, most current QA systems are limited to answering isolated questions, i.e., every time we ask a question, the systems return an answer without the ability to consider any context. In this chapter, we set out to tackle another challenging problem: \\ti{Conversational Question Answering}, where a machine has to understand a text passage and answer a series of questions that appear in a conversation.\n\nHumans gather information by engaging in conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. Figure~\\ref{fig:coqa-example} shows a conversation between two humans who are reading a passage, one acting as a questioner and the other as an answerer. In this conversation, every question after the first is dependent on the conversation history. For instance, $Q_5$ \\ti{Who?} is only a single word and is impossible to answer without knowing what has already been said. Posing short questions is an effective human conversation strategy, but such questions are really difficult for machines to parse. Therefore, conversational question answering combines the challenges from both dialogue and reading comprehension.\n\nWe believe that building systems which are able to answer such conversational questions will play a crucial role in our future conversational AI systems. To approach this problem, we need to build effective \\ti{datasets} and conversational QA \\ti{models}, and we will describe both of them in this chapter.\n\n\\begin{figure}[!t]\n\\begin{tabular}{p{0.9\\columnwidth}}\n\\toprule\nJessica went to sit in her rocking chair. Today was her birthday and she was turning 80. 
Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. Her daughter Melanie and Melanie's husband Josh were coming as well. Jessica had $\\ldots$\\\\\n\\\\\n$Q_1$: Who had a birthday? \\\\\n$A_1$: Jessica \\\\\n$R_1$: Jessica went to sit in her rocking chair. Today was her birthday and she was turning 80.\\\\\n\\vspace{0em}\n$Q_2$: How old would she be?\\\\\n$A_2$: 80 \\\\\n$R_2$: she was turning 80 \\\\\n\\vspace{0em}\n$Q_3$: Did she plan to have any visitors?\\\\\n$A_3$: Yes \\\\\n$R_3$: Her granddaughter Annie was coming over \\\\\n\\vspace{0em}\n$Q_4$: How many?\\\\\n$A_4$: Three \\\\\n$R_4$: Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. Her daughter Melanie and Melanie's husband Josh were coming as well. \\\\\n\\vspace{0em}\n$Q_5$: Who?\\\\\n$A_5$: Annie, Melanie and Josh \\\\\n$R_5$: Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. Her daughter Melanie and Melanie's husband Josh were coming as well.\\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{A conversation from \\sys{CoQA}}{\\label{fig:coqa-example} A conversation from our \\sys{CoQA} dataset. Each turn contains a question ($Q_i$), an answer ($A_i$) and a rationale ($R_i$) that supports the answer.}\n\\end{figure}\n\nThis chapter is organized as follows. We first discuss related work in Section~\\ref{sec:coqa-rw} and then we introduce \\sys{CoQA}~\\cite{reddy2019coqa} in Section~\\ref{sec:coqa-dataset}, a \\textbf{Co}nversational \\textbf{Q}uestion \\textbf{A}nswering challenge for measuring the ability of machines to participate in a question-answering style conversation.\\footnote{We launch \\sys{CoQA} as a challenge to the community at \\href{https://stanfordnlp.github.io/coqa/}{https://stanfordnlp.github.io/coqa/}.} Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. 
We define our task and describe the dataset collection process. We also analyze the dataset in depth and show that conversational questions exhibit challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning. Next, we describe several strong conversational and reading comprehension models we built for \\sys{CoQA} in Section~\\ref{sec:coqa-models} and present experimental results in Section~\\ref{sec:coqa-experiments}. Finally, we discuss future work on conversational question answering (Section~\\ref{sec:coqa-future}).\n
  },
  {
    "path": "chapters/coqa/models.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Models}\n\\label{sec:coqa-models}\n\nGiven a passage $p$, the conversation history \\{$q_1, a_1, \\ldots q_{i-1}, a_{i-1}$\\} and a question $q_i$, the task is to predict the answer ${a_i}$. Our task can be modeled as either a conversational response generation problem or a reading comprehension problem. We evaluate strong baselines from each class of models and a combination of the two on \\sys{CoQA}.\n\n\\subsection{Conversational Models}\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[height=9.5cm]{img/coqa_pgnet.pdf}\n\\end{center}\n\\longcaption{The pointer-generator network used for conversational question answering}{\\label{fig:coqa-pgnet} The pointer-generator network used for conversational question answering. The figure is adapted from \\newcite{see2017get}.}\n\\end{figure}\n\nThe basic goal of conversational models is to predict the next utterance based on its conversation history. Sequence-to-sequence (seq2seq) models~\\cite{sutskever2014sequence} have shown promising results for generating conversational responses \\cite{vinyals2015neural,li2016diversity,zhang2018personalizing}. Motivated by their success, we use a standard sequence-to-sequence model with an attention mechanism for generating answers. We append the passage, the conversation history (the question/answer pairs in the last $n$ turns) and the current question as, $p\\; \\mathrm{<}q\\mathrm{>}\\; q_{i-n} \\;\\mathrm{<}a\\mathrm{>}\\; a_{i-n}\\; \\ldots$ $\\mathrm{<}q\\mathrm{>}\\; q_{i-1} \\;\\mathrm{<}a\\mathrm{>}\\; a_{i-1}\\;$  $\\mathrm{<}q\\mathrm{>}\\;q_i$, and feed it into a bidirectional LSTM encoder, where $\\mathrm{<}q\\mathrm{>}$ and $\\mathrm{<}a\\mathrm{>}$ are special tokens used as delimiters. 
We then generate the answer using an LSTM decoder which attends to the encoder states.\n\nMoreover, as the answer words are likely to appear in the original passage, we adopt a copy mechanism in the decoder proposed for summarization tasks \\cite{gu2016incorporating,see2017get}, which allows the decoder to (optionally) copy a word from the passage or the conversation history. We call this model the Pointer-Generator network~\\cite{see2017get}, \\sys{PGNet}. Figure~\\ref{fig:coqa-pgnet} illustrates a full model of \\sys{PGNet}. Formally, we denote the encoder hidden vectors by $\\{\\tilde{\\mf{h}}_i\\}$, the decoder state at timestep $t$ by $\\mf{h}_t$ and the input vector by $\\mf{x}_t$. An attention function is computed based on $\\{\\tilde{\\mf{h}}_i\\}$ and $\\mf{h}_t$ as $\\alpha_i$ (Equation~\\ref{eq:attention}), and the context vector is computed as $\\mf{c} = \\sum_{i}{\\alpha_i \\tilde{\\mf{h}}_i}$ (Equation~\\ref{eq:context-vector}).\n\nThe copy mechanism first computes the \\ti{generation probability} $p_{\\text{gen}} \\in [0, 1]$ which controls the probability that it generates a word from the full vocabulary $\\mathcal{V}$ (rather than copying a word) as:\n\n\\begin{equation}\n    p_{\\text{gen}} = \\sigma\\left({\\mf{w}^{(c)}}^{\\intercal}\\mf{c} + {\\mf{w}^{(x)}}^{\\intercal}\\mf{x}_t + {\\mf{w}^{(h)}}^{\\intercal}\\mf{h}_t + b\\right).\n\\end{equation}\n\nThe final probability distribution of generating word $w$ is computed as:\n\\begin{equation}\n    P(w) = p_{\\text{gen}}P_{\\text{vocab}}(w) + (1 - p_{\\text{gen}})\\sum_{i: w_i = w}\\alpha_i,\n\\end{equation}\nwhere $P_{\\text{vocab}}(w)$ is the original probability distribution (computed based on $\\mf{c}$ and $\\mf{h}_t$) and $\\{w_i\\}$ refers to all the words in the passage and the dialogue history. For more details, we refer readers to \\cite{see2017get}.\n\n\n\\subsection{Reading Comprehension Models}\nThe second class of models we evaluate is neural reading comprehension models. 
In particular, models for span prediction problems can't be applied directly, as a large portion of the \\sys{CoQA} questions don't have a single span in the passage as their answer, e.g., $Q_3$, $Q_4$ and $Q_5$ in Figure~\\ref{fig:coqa-example}. Therefore, we modified the \\sys{Stanford Attentive Reader} model we described in Section~\\ref{sec:sar} for this problem. Since the model requires text spans as answers during training, we select the span which has the highest lexical overlap (F1 score) with the original answer as the gold answer. If the answer appears multiple times in the story, we use the rationale to find the correct one. If any answer word does not appear in the passage, we fall back to an additional \\textit{unknown} token as the answer (about 17\\% of examples). We prepend each question with its past questions and answers to account for conversation history, similar to the conversational models.\n\n\\subsection{A Hybrid Model}\nThe last model we build is a \\ti{hybrid} model, which combines the advantages of the aforementioned two models. The reading comprehension models can predict a text span as an answer, but they can't produce answers that do not overlap with the passage. Therefore, we combine \\sys{Stanford Attentive Reader} with \\sys{PGNet} to address this problem since \\sys{PGNet} can generate free-form answers effectively. In this hybrid model, we use the reading comprehension model to first point to the answer evidence in text, and \\sys{PGNet} naturalizes the evidence into the final answer. For example, for Q$_5$ in Figure~\\ref{fig:coqa-example}, we expect that the reading comprehension model first predicts the rationale R$_5$ \\ti{Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. 
Her daughter Melanie and Melanie’s husband Josh were coming as well.}, and then \\sys{PGNet} generates A$_5$ \\ti{Annie, Melanie and Josh} from R$_5$.\n\nWe make a few changes to both models based on empirical performance. For the \\sys{Stanford Attentive Reader} model, we only use rationales as answers for the questions with a non-extractive answer. For \\sys{PGNet}, we only provide the current question and the span predictions from the \\sys{Stanford Attentive Reader} model as input to the encoder. During training, we feed the oracle spans into \\sys{PGNet}.\n"
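As a toy numerical illustration (ours is implemented inside a neural network, not as a standalone function like this; the names below are ours), the final copy distribution $P(w) = p_{\text{gen}}P_{\text{vocab}}(w) + (1 - p_{\text{gen}})\sum_{i: w_i = w}\alpha_i$ can be computed as:

```python
def copy_distribution(p_gen, p_vocab, attention, source_words):
    """Mix the generation distribution with attention-weighted copying:
    P(w) = p_gen * P_vocab(w) + (1 - p_gen) * (sum of attention weights
    over source positions where w occurs)."""
    p = {w: p_gen * prob for w, prob in p_vocab.items()}
    for alpha, w in zip(attention, source_words):
        p[w] = p.get(w, 0.0) + (1.0 - p_gen) * alpha
    return p

dist = copy_distribution(
    p_gen=0.6,
    p_vocab={"yes": 0.7, "no": 0.3},   # generation probabilities over the vocabulary
    attention=[0.9, 0.1],              # attention weights over source positions
    source_words=["annie", "was"],     # passage/history tokens
)
# The mixture remains a valid probability distribution (sums to 1), and
# out-of-vocabulary source words like "annie" receive nonzero mass.
```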
  },
  {
    "path": "chapters/coqa/related_work.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Related Work}\n\\label{sec:coqa-rw}\n\nConversational question answering is directly related to \\tf{dialogue}. Building conversational agents, or dialogue systems, that converse with humans in natural language is one of the major goals of natural language understanding. The two most common classes of dialogue systems are \\ti{task-oriented} and \\ti{chit-chat} (or \\ti{chatbot}) dialogue agents. Task-oriented dialogue systems are designed for a particular task and set up to have short conversations (e.g., booking a flight or making a restaurant reservation). They are evaluated based on task-completion rate or time to task completion. In contrast, chit-chat dialogue systems are designed for extended, casual conversations, without a specific goal. Usually, the longer the user engagement and interaction, the better these systems are.\n\nAnswering questions is also a core task of dialogue systems, because one of the most common needs for humans to interact with dialogue agents is to seek information and ask questions about various topics. QA-based dialogue techniques have been developed extensively in automated personal assistant systems such as Amazon's \\sys{Alexa}, Apple's \\sys{Siri} or \\sys{Google Assistant}, based on either structured knowledge bases or unstructured text collections. Modern dialogue systems are mostly built on top of deep neural networks. For a comprehensive survey of neural approaches to different types of dialogue systems, we refer readers to \\cite{gao2018neural}.\n\n\\begin{figure}[!t]\n    \\center\n    \\includegraphics[scale=0.45]{img/other_coqa_tasks.pdf}\n    \\longcaption{Other conversational question answering tasks on images and KBs}{\\label{fig:other-coqa-tasks}Other conversational question answering tasks on images (left) and KBs (right). 
Images courtesy: \\cite{das2017visual} and \\cite{guo2018dialog} with modifications.}\n\\end{figure}\n\nOur work is closely related to the \\ti{Visual Dialog} task of \\cite{das2017visual} and the \\ti{Complex Sequential Question Answering} task of \\cite{saha2018complex}, which perform conversational question answering on images and a knowledge graph (e.g., \\sys{WikiData}) respectively, with the latter focusing on questions obtained by paraphrasing templates. Figure~\\ref{fig:other-coqa-tasks} demonstrates an example from each task. We focus on conversations over a passage of text, which requires reading comprehension ability.\n\nAnother related line of research is \\ti{sequential question answering}~\\cite{iyyer2017search,talmor2018web}, in which a complex question is decomposed into a sequence of simpler questions. For example, a question \\ti{What super hero from Earth appeared most recently?} can be decomposed into the following three questions: 1) \\ti{Who are all of the super heroes?}, 2) \\ti{Which of them come from Earth?}, and 3) \\ti{Of those, who appeared most recently?}. Therefore, their focus is on answering a complex question via sequential question answering, whereas we are more interested in natural conversations about a variety of topics, in which questions can depend on the dialogue history.\n
  },
  {
    "path": "chapters/openqa/evaluation.tex",
"content": "%!TEX root = ../../thesis.tex\n\n\\section{Evaluation}\n\\label{sec:drqa-eval}\n\nHaving described all the basic elements of our \\sys{DrQA} system, we now turn to its evaluation.\n\n\\subsection{Question Answering Datasets}\nThe first question is which question answering datasets we should evaluate on. As we discussed, \\sys{SQuAD} is one of the largest general-purpose QA datasets currently available, but it is very different from the open-domain QA setting. We propose to train and evaluate our system on other datasets developed for open-domain QA that have been constructed in different ways. We hence adopt the following three datasets:\n\n\\paragraph{TREC} This dataset is based on the benchmarks from the TREC QA tasks that have been curated by \\newcite{baudivs2015modeling}. We use the large version, which contains a total of 2,180 questions extracted from the datasets from TREC 1999, 2000, 2001 and 2002.\\footnote{This dataset is available at \\url{https://github.com/brmson/dataset-factoid-curated}.} Note that for this dataset, all the answers are written as regular expressions; for example, the answer to the question \\ti{When is Fashion week in NYC?} is \\texttt{Sept(ember)?|Feb(ruary)?}, so the answers \\ti{Sept}, \\ti{September}, \\ti{Feb} and \\ti{February} are all judged as correct.\n\n\\paragraph{WebQuestions} Introduced in \\newcite{berant2013semantic}, this dataset is built to answer questions from the Freebase KB. It was created by crawling questions through the \\sys{Google Suggest} API, and then obtaining answers using Amazon Mechanical Turk. We convert each answer to text by using entity names so that the dataset does not reference Freebase IDs and is purely made of plain-text question-answer pairs.\n\n\\paragraph{WikiMovies} This dataset, introduced in \\newcite{miller2016key}, contains 96k question-answer pairs in the domain of movies. 
Originally created from the \\sys{OMDb} and \\sys{MovieLens} databases, the examples are built such that they can also be answered by using a subset of Wikipedia as the knowledge source (the title and the first section of articles from the movie domain).\n\nWe would like to emphasize that these datasets are not necessarily collected in the context of answering from Wikipedia. The \\sys{TREC} dataset was designed for text-based question answering (the primary TREC document sets consist mostly of newswire articles), while \\sys{WebQuestions} and \\sys{WikiMovies} were mainly collected for knowledge-based question answering. We put all these resources in one unified framework, and test how well our system can answer all the questions --- hoping that it can reflect the performance of general-knowledge QA.\n\nTable~\\ref{tab:qa-data-stats} and Figure~\\ref{fig:qa-data-stats} give detailed statistics of these QA datasets. As we can see, the distribution of \\sys{SQuAD} examples is quite different from that of the other QA datasets. Due to the construction method, \\sys{SQuAD} has longer questions (10.4 tokens vs 6.7--7.5 tokens on average). Also, all these datasets have short answers (although the answers in \\sys{SQuAD} are slightly longer) and most of them are factoid.\n\nNote that there might be multiple answers for many of the questions in these QA datasets (see the \\ti{\\# answers} column of Table~\\ref{tab:qa-data-stats}). For example, there are two valid answers, \\ti{English} and \\ti{Urdu}, to the question \\ti{What language do people speak in Pakistan?} on \\sys{WebQuestions}. 
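An evaluation that accepts any of the gold answers can be sketched as follows. This is our own minimal illustration, not the actual DrQA evaluation code: the helper names are ours, the string normalization is the SQuAD-style convention (lowercasing, stripping punctuation and articles), and the regex variant reflects the TREC datasets whose gold answers are regular expressions.

```python
import re
import string

def normalize(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list) -> bool:
    """A prediction is correct if it matches ANY of the gold answers."""
    return any(normalize(prediction) == normalize(g) for g in gold_answers)

def regex_match(prediction: str, gold_pattern: str) -> bool:
    """TREC-style evaluation: the gold answer is a regular expression."""
    return re.fullmatch(gold_pattern, prediction, flags=re.IGNORECASE) is not None
```

For instance, `exact_match("Urdu", ["English", "Urdu"])` is true, and `regex_match("September", r"Sept(ember)?|Feb(ruary)?")` accepts any of the surface forms listed above.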
As our system is designed to return one answer, our evaluation considers a prediction correct if it matches any of the gold answers.\n\n\\begin{figure}[h]\n\\center\n\\includegraphics[scale=0.7]{img/qa_stat.png}\n\\longcaption{The average length of questions and answers in our QA datasets}{\\label{fig:qa-data-stats}The average length of questions and answers in our QA datasets. All the statistics are computed based on the training sets.}\n\\end{figure}\n\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{l | r r | r | r}\n\\toprule\n\\tf{Dataset} & \\tf{\\# Train} & \\tf{\\# DS Train} & \\tf{\\# Test} & \\tf{\\# answers} \\\\\n\\midrule\n\\sys{SQuAD} & 87,599 & 71,231 & N/A & 1.0  \\\\\n\\midrule\n\\sys{TREC} &  1,486$^{\\dagger}$ & 3,464 & 694 & 3.2\\footnote{As all the answer strings are regular expressions, it is difficult to estimate the number of answers. We simply list the number of alternation symbols \\texttt{|} in the answer.} \\\\\n\\sys{WebQuestions} &  3,778$^{\\dagger}$ & 4,602 & 2,032 & 2.4 \\\\\n\\sys{WikiMovies} &  96,185$^{\\dagger}$ & 36,301 & 9,952 & 1.9 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\longcaption{Statistics of the QA datasets used for \\sys{DrQA}.}{\\label{tab:qa-data-stats} Statistics of the QA datasets used for \\sys{DrQA}. DS Train: distantly supervised training data. $^{\\dagger}$: These training sets are not used as is because no passage is associated with each question.}\n\\end{table}\n\n\n\n\n\\subsection{Implementation Details}\n\n\\subsubsection{Processing Wikipedia}\nWe use the 2016-12-21 dump\\footnote{\\url{https://dumps.wikimedia.org/enwiki/latest}} of English Wikipedia for all of our full-scale experiments as the knowledge source used to answer questions. 
For each page, only the plain text is extracted and all structured data sections such as lists and figures are stripped.\\footnote{We use the WikiExtractor script: \\url{https://github.com/attardi/wikiextractor}.} After discarding internal disambiguation, list, index, and outline pages, we retain 5,075,182 articles consisting of 9,008,962 unique uncased token types.\n\n\n\\subsubsection{Distantly-supervised data}\nWe use the following process for each question-answer pair from the training portion of each dataset to build our distantly-supervised training examples. First, we run our \\sys{Document Retriever} on the question to retrieve the top 5 Wikipedia articles. All paragraphs from those articles without an exact match of the known answer are directly discarded. All paragraphs shorter than 25 or longer than 1500 characters are also filtered out. If any named entities are detected in the question, we remove any paragraph that does not contain them at all. For every remaining paragraph in each retrieved page, we score all positions that match an answer using unigram and bigram overlap between the question and a 20-token window, keeping up to the top 5 paragraphs with the highest overlaps. If there is no paragraph with non-zero overlap, the example is discarded; otherwise we add each found pair to our DS training dataset. Some examples are shown in Figure~\\ref{fig:ds_examples} and the number of distantly supervised examples we created for training is given in Table~\\ref{tab:qa-data-stats} (column \\ti{\\# DS Train}).\n\n\n\\begin{figure}\n\\begin{center}\n\\small\n\\begin{tabularx}{\\textwidth}{l|p{4.5cm}|p{7cm}}\n\\hline\n\\bf Dataset & \\bf Example & \\bf Article / Paragraph \\\\\n\\hline\n\\sys{TREC} & {\\bf Q}: What U.S. state's motto is ``Live free or Die''? \\newline {\\bf A}: New Hampshire & {\\bf Article}: Live Free or Die \\newline {\\bf Paragraph}: ``Live Free or Die'' is the official motto of the U.S. 
state of \\hl{New Hampshire}, adopted by the state in 1945. It is possibly the best-known of all state mottos, partly because it conveys an assertive independence historically found in American political philosophy and partly because of its contrast to the milder sentiments found in other state mottos.\\\\\n\\hline\n\\sys{WebQuestions}  & {\\bf Q}: What part of the atom did Chadwick discover?$^\\dagger$  \\newline {\\bf A}: neutron  & {\\bf Article}: Atom \\newline {\\bf Paragraph}: ... The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the \\hl{neutron}, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932.  ... \\\\\n\\hline\n\\sys{WikiMovies} & {\\bf Q}: Who wrote the film Gigli? \\newline {\\bf A}: Martin Brest &  {\\bf Article}: Gigli \\newline {\\bf Paragraph}: Gigli is a 2003 American romantic comedy film written and directed by \\hl{Martin Brest} and starring Ben Affleck, Jennifer Lopez, Justin Bartha, Al Pacino, Christopher Walken, and Lainie Kazan. \\\\\n\\hline\n\\end{tabularx}\n\\end{center}\n\\longcaption{Examples of distantly-supervised examples from QA datasets}{\\label{fig:ds_examples}Example training data from each QA dataset. In each case we show an associated paragraph where distant supervision (DS) correctly identified the answer within it, which is highlighted.}\n\\end{figure}\n\n\n\\Subsection{retrieval-eval}{Document Retriever Performance}\nWe first examine the performance of our retrieval module on all the QA datasets. 
Table~\\ref{tab:ir-res} compares the performance of the two approaches described in Section~\\ref{sec:doc-retriever} with that of the Wikipedia Search Engine\\footnote{We use the Wikipedia Search API \\url{https://www.mediawiki.org/wiki/API:Search}.} for the task of finding articles that contain the answer given a question.\n\nSpecifically, we compute the ratio of questions for which the text span of any of their associated answers appears in at least one of the top 5 relevant pages returned by each system.\n\nResults on all datasets indicate that our simple approach outperforms Wikipedia Search, especially with bigram hashing. We also compare doing retrieval with Okapi BM25 or by using cosine distance in the word embeddings space (by encoding questions and articles as bag-of-embeddings), both of which we find perform worse.\n\n\\begin{table}[t]\n\\begin{center}\n\\normalsize\n\\begin{tabular}{l r r r}\n\\toprule\n\\bf Dataset &  \\sys{Wiki. Search} & \\multicolumn{2}{c}{\\sys{Document Retriever}} \\\\\n&    & unigram &  bigram  \\\\\n\\midrule\n% SQuAD & 62.7 &  76.1 & \\bf 77.8 \\\\\n% %\\curq  & 82.8 & 84.2 & \\bf 85.6 \\\\\n\\sys{TREC} & 81.0 & 85.2 & \\bf 86.0 \\\\\n\\sys{WebQuestions} &    73.7 & \\bf 75.5 & 74.4 \\\\\n\\sys{WikiMovies} & 61.7 &  54.4 &  \\bf 70.3 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\longcaption{Document retrieval results}{\\label{tab:ir-res} Document retrieval results. \\% of questions for which the answer segment appears in one of the top 5 pages returned by the method. }\n\\end{table}\n\n\n\\subsection{Final Results}\n\\label{sec:drqa-final-results}\nFinally, we assess the performance of our full system \\sys{DrQA} for answering open-domain questions using all these datasets. 
We compare three versions of \\sys{DrQA} which evaluate the impact of using distant supervision and multitask learning across the training sources provided to \\sys{Document Reader} (\\sys{Document Retriever} remains the same for each case):\n\n\\begin{itemize}\n\\item\n  \\sys{SQuAD}: A single \\sys{Document Reader} model is trained on the \\sys{SQuAD} training set only and used on all evaluation sets. We used the model that we described in Section~\\ref{sec:drqa} (the F1 score is 79.0\\% on the test set of \\sys{SQuAD}).\n\\item\n  Fine-tune (DS): A \\sys{Document Reader} model is pre-trained on \\sys{SQuAD} and then fine-tuned for each dataset independently using its distant supervision (DS) training set.\n\\item\n  Multitask (DS): A single \\sys{Document Reader} model is jointly trained on the SQuAD training set and all the distantly-supervised examples.\n\\end{itemize}\n\nFor the full Wikipedia setting we use a streamlined model that does not use the \\sys{CoreNLP} parsed $f_{token}$ features or lemmas for $f_{exact\\_match}$. We find that while these help for more exact paragraph reading in \\sys{SQuAD}, they don't improve results in the full setting. Additionally, \\sys{WebQuestions} and \\sys{WikiMovies} provide a list of candidate answers (1.6 million \\sys{Freebase} entity strings for \\sys{WebQuestions} and 76k movie-related entities for \\sys{WikiMovies}) and we restrict the predicted answer span to be in one of these lists.\n\nTable~\\ref{tab:drqa-full-results} presents the results. We only consider top-1, exact-match accuracy, which is the most restricted and challenging setting. In the original paper \\cite{chen2017reading}, we also evaluated the question/answer pairs in SQuAD. 
We omit them here because at least a third of these questions are context-dependent and are not really suitable for open QA.\n\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{l c ccc cc}\n\\toprule\n\\textbf{Dataset} &  \\tf{YodaQA} &  \\multicolumn{3}{c}{\\tf{DrQA}} & \\multicolumn{2}{c}{\\tf{DrQA*}} \\\\\n&   &  {SQuAD} &  {FT} & {MT} & {SQuAD} & {FT} \\\\\n\\midrule\n\\sys{TREC} & 31.3 & 19.7 & 25.7 & 25.4 &  21.3 &  28.8 \\\\\n\\sys{WebQuestions} & 38.9 & 11.8 & 19.5 & 20.7 & 14.2 & 24.3 \\\\\n\\sys{WikiMovies} & N/A & 24.5 & 34.3 & 36.5 & 31.9 & 46.0 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\longcaption{Final performance of DrQA}{\\label{tab:drqa-full-results} Full Wikipedia results. Top-1 exact-match accuracy (\\%). \\tf{FT}: Fine-tune (DS). \\tf{MT}: Multitask (DS). The \\sys{DrQA*} results are taken from \\newcite{raison2018weaver}.}\n\\end{table}\n\nDespite the difficulty of the task compared to the reading comprehension task (where the right paragraph is given) and unconstrained QA (using redundant resources), \\sys{DrQA} still provides reasonable performance across all three datasets.\n\nWe are interested in a single, full system that can answer any question using Wikipedia. The single model trained only on \\sys{SQuAD} is outperformed on all the datasets by the multitask model that uses distant supervision. However, performance when training on SQuAD alone is not far behind, indicating that task transfer is occurring. The majority of the improvement from \\sys{SQuAD} to Multitask (DS) learning, however, is likely not from task transfer, as fine-tuning on each dataset alone using DS also gives improvements, showing that it is the introduction of extra data in the same domain that helps. 
Nevertheless, since our overall goal is the best single system, the Multitask (DS) system is our system of choice.\n\nWe compare our system to \\sys{YodaQA} \\cite{baudivs2015yodaqa} (an unconstrained QA system using redundant resources), giving results which were previously reported on \\sys{TREC} and \\sys{WebQuestions}.\\footnote{The results are extracted from \\href{https://github.com/brmson/yodaqa/wiki/Benchmarks}{https://github.com/brmson/yodaqa/wiki/Benchmarks}.} Despite the increased difficulty of our task, it is reassuring that our performance is not too far behind on \\sys{TREC} (31.3 vs 25.4). The gap is slightly bigger on \\sys{WebQuestions}, likely because this dataset was created from the specific structure of \\sys{Freebase}, which \\sys{YodaQA} uses directly.\n\nWe also include the results from an enhancement of our model named \\sys{DrQA*}, presented in \\newcite{raison2018weaver}. The biggest change is that this reading comprehension model is trained and evaluated directly on the Wikipedia articles instead of paragraphs (documents are on average 40 times larger than individual paragraphs). As we can see, the performance improves consistently on all the datasets, and the gap to \\sys{YodaQA} is further reduced.\n\n\\clearpage\n\\begin{longtable}{l l p{12cm}}\n\\hline\n(a) & \\tf{Question} & What is question answering? \\\\\n& \\tf{Answer} & a computer science discipline within the fields of information retrieval and natural language processing \\\\\n& \\tf{Wiki. article} & \\href{https://en.wikipedia.org/wiki/Question_answering}{Question Answering} \\\\\n& \\tf{Passage} & {\\small Question Answering (QA) is \\hl{a computer science discipline within the fields of information retrieval and natural language processing} (NLP), which is concerned with building systems that automatically answer questions posed by humans in a natural language.} \\\\\n\\hline\n(b) & \\tf{Question} & Which state is Stanford University located in? 
\\\\\n& \\tf{Answer} & California \\\\\n& \\tf{Wiki. article} & \\href{https://en.wikipedia.org/wiki/Stanford_Memorial_Church}{Stanford Memorial Church} \\\\\n& \\tf{Passage} & {\\small Stanford Memorial Church (also referred to informally as MemChu) is located on the Main Quad at the center of the Stanford University campus in Stanford, \\hl{California}, United States. It was built during the American Renaissance by Jane Stanford as a memorial to her husband Leland. Designed by architect Charles A. Coolidge, a protégé of Henry Hobson Richardson, the church has been called \"the University's architectural crown jewel\".} \\\\\n\\hline\n(c) & \\tf{Question} & Who invented LSTM? \\\\\n& \\tf{Answer} & Sepp Hochreiter \\& J\\\"urgen Schmidhuber \\\\\n& \\tf{Wiki. article}  & \\href{https://en.wikipedia.org/wiki/Deep_learning}{Deep Learning} \\\\\n& \\tf{Passage} & {\\small Today, however, many aspects of speech recognition have been taken over by a deep learning method called Long short-term memory (LSTM), a recurrent neural network published by \\hl{Sepp Hochreiter \\& J\\\"urgen Schmidhuber} in 1997. LSTM RNNs avoid the vanishing gradient problem and can learn ``Very Deep Learning'' tasks that require memories of events that happened thousands of discrete time steps ago, which is important for speech. In 2003, LSTM started} \\\\\n& & {\\small  to become competitive with traditional speech recognizers on certain tasks. Later it was combined with CTC in stacks of LSTM RNNs. In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49\\% through CTC-trained LSTM, which is now available through Google Voice to all smartphone users, and has become a show case of deep learning.} \\\\\n\\hline\n(d) & \\tf{Question} & What is the answer to life, the universe, and everything? \\\\\n& \\tf{Answer} & 42 \\\\\n& \\tf{Wiki. 
article} & \\href{https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy}{Phrases from The Hitchhiker's Guide to the Galaxy} \\\\\n& \\tf{Passage} & {\\small The number 42 and the phrase, \"Life, the universe, and everything\" have attained cult status on the Internet. \"Life, the universe, and everything\" is a common name for the off-topic section of an Internet forum and the phrase is invoked in similar ways to mean \"anything at all\". Many chatbots, when asked about the meaning of life, will answer \"42\". Several online calculators are also programmed with the Question. Google Calculator will give the result to \"the answer to life the universe and everything\" as 42, as will Wolfram's Computational Knowledge Engine. Similarly, DuckDuckGo also gives the result of \"the answer to the ultimate question of life, the universe and everything\" as \\hl{42}. In the online community Second Life, there is a section on a sim called \"42nd Life.\" It is devoted to this concept in the book series, and several attempts at recreating Milliways, the Restaurant at the End of the Universe, were made.} \\\\\n\\hline\n\\longcaption{Sample predictions of our \\sys{DrQA} system}{\\label{tab:drqa-output}Sample predictions of our \\sys{DrQA} system.}\n\\end{longtable}\n\nLastly, our \\sys{DrQA} system is open-sourced at \\href{https://github.com/facebookresearch/DrQA}{https://github.com/facebookresearch/DrQA} (the Multitask (DS) system was deployed). Table~\\ref{tab:drqa-output} lists some sample predictions that we tried by ourselves (not in any of these datasets). 
As can be seen, our system returns a precise answer to all of these factoid questions, and some of them are not trivial to answer:\n\n\\begin{enumerate}[(a)]\n    \\item It is not trivial to identify that \\ti{a computer science discipline within the fields of information retrieval and natural language processing} is the complete noun phrase and the correct answer, even though the question is simple.\n    \\item Our system finds the answer in another Wikipedia article, \\ti{Stanford Memorial Church}, and gives exactly the correct answer \\ti{California} as the \\ti{state} (instead of \\ti{Stanford} or \\ti{United States}).\n    \\item To get the correct answer, the system needs to understand the syntactic structure of both the question and the context: \\ti{Who invented LSTM?} and \\ti{a deep learning method called Long short-term memory (LSTM), a recurrent neural network published by Sepp Hochreiter \\& J\\\"urgen Schmidhuber in 1997.}\n\\end{enumerate}\n\nConceptually, our system is simple and elegant, and doesn't rely on any additional linguistic analysis or external or hand-coded resources (e.g., dictionaries). We think this approach holds great promise for a new generation of open-domain question answering systems. In the next section, we discuss current limitations and possible directions for further improvement.\n"
  },
  {
    "path": "chapters/openqa/future.tex",
"content": "%!TEX root = ../../thesis.tex\n\n\\section{Future Work}\n\\label{sec:openqa-future}\n\nOur \\sys{DrQA} system demonstrates that combining information retrieval and neural reading comprehension is an effective approach for open-domain question answering. We hope that our work represents a first step in this research direction. However, our system is still at an early stage and many implementation details can be further improved.\n\nWe think the following research directions will (greatly) improve our \\sys{DrQA} system and should be pursued as future work. Indeed, some of these ideas were implemented in the year following the publication of our \\sys{DrQA} system, and we also describe them in detail in this section.\n\n\\paragraph{Aggregating evidence from multiple paragraphs.} Our system adopted the simplest and most straightforward approach: we took the argmax over the unnormalized scores of all the retrieved passages. This is not ideal because 1) it implies that each passage contains the correct answer (as in \\sys{SQuAD} examples), so our system outputs one and only one answer for each passage, which is not the case for most retrieved passages; and 2) our current training paradigm doesn't guarantee that the scores of different passages are comparable, which creates a gap between the training and the evaluation process.\n\nTraining on full Wikipedia articles is one way to alleviate this problem (see the \\sys{DrQA*} results in Table~\\ref{tab:drqa-full-results}); however, such models run slowly and are difficult to parallelize. \\newcite{clark2018simple} proposed to perform multi-paragraph training with modified training objectives, where the span start and end scores are normalized across all paragraphs sampled from the same context. They demonstrated that this works much better than training on individual passages independently. 
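The shared-normalization idea can be illustrated with a toy numerical sketch (our own illustration with made-up scores, not the authors' implementation): instead of a separate softmax within each paragraph, a single softmax is taken over the candidate spans of all paragraphs retrieved for the same question, so the resulting probabilities are directly comparable.

```python
import math

def per_paragraph_probs(span_scores):
    """Baseline: softmax within each paragraph separately, so the
    probabilities are NOT comparable across paragraphs."""
    out = []
    for scores in span_scores:
        z = sum(math.exp(s) for s in scores)
        out.append([math.exp(s) / z for s in scores])
    return out

def shared_norm_probs(span_scores):
    """Shared normalization: one softmax over the candidate spans of ALL
    paragraphs retrieved for the same question."""
    z = sum(math.exp(s) for scores in span_scores for s in scores)
    return [[math.exp(s) / z for s in scores] for scores in span_scores]
```

With `span_scores = [[2.0, 1.0], [5.0]]`, per-paragraph normalization assigns the single span of the second paragraph probability 1.0 regardless of its raw score, while shared normalization lets the highest raw score dominate across paragraphs.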
Similarly, \\newcite{wang2018r} and \\newcite{wang2018evidence} proposed to train an explicit passage re-ranking component on the retrieved articles: \\newcite{wang2018r} implemented this in a reinforcement learning framework so that the re-ranker and answer extraction components are jointly trained, while \\newcite{wang2018evidence} proposed a strength-based re-ranker and a coverage-based re-ranker, which aggregate evidence from multiple paragraphs more directly.\n\n\\paragraph{Using more and better training data.} A second aspect that makes a big impact is the training data. Our \\sys{DrQA} system collected only 44k distantly-supervised training examples from \\sys{TREC}, \\sys{WebQuestions} and \\sys{WikiMovies}, and we demonstrated their effectiveness in Section~\\ref{sec:drqa-final-results}. The system could be further improved if we leveraged more supervised training data --- either from \\sys{TriviaQA}~\\cite{joshi2017triviaqa} or by generating more data from other QA resources. Moreover, these distantly supervised examples inevitably suffer from noise (i.e., the paragraph may contain the answer string without actually answering the question); \\newcite{lin2018denoising} proposed a solution to de-noise these distantly supervised examples and demonstrated gains in an evaluation.\n\nWe also believe that adding negative examples should improve the performance of our system substantially. We can create negative examples using our full pipeline: the \\sys{Document Retriever} module can help us find relevant paragraphs that do not contain the correct answer. 
We can also incorporate existing resources such as \\sys{SQuAD 2.0}~\\cite{rajpurkar2018know} into our training process, which contains curated, high-quality negative examples.\n\n\\paragraph{Making the \\sys{Document Retriever} trainable.} A third promising direction that has not been fully studied yet is to employ a machine learning approach for the \\sys{Document Retriever} module. Our system adopted a straightforward, non-machine learning model and further improvement on the retrieval performance (Table~\\ref{tab:ir-res}) should lead to an improvement on the full system. A training corpus for the \\sys{Document Retriever} component can be collected either from other resources or from the QA data (e.g., using whether an article contains the answer to the question as a label). Joint training of the \\sys{Document Retrieval} and the \\sys{Document Reader} component will be a very desirable and promising direction for future work.\n\nRelated to this, \\newcite{clark2018simple} also built an open-domain question answering system\\footnote{The demo is at \\href{https://documentqa.allenai.org}{https://documentqa.allenai.org}.} on top of a search engine (Bing web search) and demonstrated superior performance compared to ours. We think the results are not directly comparable and the two approaches (using a commercial search engine or building an independent IR component) both have pros and cons. Building our own IR component gets rid of an existing API call and can run faster and easily adapt to new domains.\n\n\\paragraph{Better \\sys{Document Reader} module.} For our \\sys{DrQA} system, we used the neural reading comprehension model which achieved F1 of 79.0\\% on the test set of \\sys{SQuAD 1.1}. 
With the recent development of neural reading comprehension models (Section~\\ref{sec:advances}), we are confident that if we replace our current \\sys{Document Reader} model with state-of-the-art models~\\cite{devlin2018bert}, the performance of our full system will improve as well.\n\n\\paragraph{More analysis is needed.} Another important piece of missing work is an in-depth analysis of our current systems: understanding which questions they can answer, and which they cannot. We think it is important to compare our modern systems to the earlier TREC QA results under the same conditions. This will help us understand where we have made genuine progress and which techniques from the pre-deep-learning era we can still use, to build better question answering systems in the future.\n\nConcurrently with our work, there are several works in a similar spirit to ours, including \\sys{SearchQA}~\\cite{dunn2017searchqa} and \\sys{Quasar-T}~\\cite{dhingra2017quasar}, which both collected relevant documents for trivia or \\sys{Jeopardy!} questions --- the former retrieved documents from \\sys{ClueWeb} using the \\sys{Lucene} index and the latter used \\sys{Google} search. \\sys{TriviaQA}~\\cite{joshi2017triviaqa} also has an open-domain setting where all the retrieved documents from Bing web search are kept.\nHowever, these datasets still focus on the task of question answering from the retrieved documents, while we are more interested in building an end-to-end QA system.\n"
  },
  {
    "path": "chapters/openqa/intro.tex",
"content": "%!TEX root = ../../thesis.tex\n\n% \\section{Introduction}\nIn \\sys{Part I}, we described the task of reading comprehension: its formulation and development over recent years, the key components of neural reading comprehension systems, and future research directions. However, it remains unclear whether reading comprehension is merely a task for measuring language understanding abilities, or whether it can enable useful applications.  In \\sys{Part II}, we will answer this question and discuss our efforts at building applications which leverage neural reading comprehension as their core component.\n\nIn this chapter, we view \\ti{open domain question answering} as an application of reading comprehension. Open domain question answering has been a long-standing problem in the history of NLP\\@. The goal of open domain question answering is to build automated computer systems which are able to answer any sort of (factoid) questions that humans might ask, based on a large collection of unstructured natural language documents, structured data (e.g., knowledge bases), semi-structured data (e.g., tables) or even other modalities such as images or videos.\n\nWe are the first to test how well neural reading comprehension methods can perform in an open-domain QA framework. We believe that the high performance of these systems can be a key ingredient in building a new generation of open-domain question answering systems, when combined with effective information retrieval techniques.\n\nThis chapter is organized as follows. We first give a high-level overview of open domain question answering and some notable systems from its history (Section~\\ref{sec:openqa-rw}). Next, we introduce an open-domain question answering system that we built called \\sys{DrQA}, designed to answer questions from English Wikipedia (Section~\\ref{sec:drqa}). 
It essentially combines an information retrieval module and the high-performing neural reading comprehension module that we described in Section~\\ref{sec:sar}. We further talk about how we can improve the system by creating distantly-supervised training examples from the retrieval module. We then present a comprehensive evaluation on multiple question answering benchmarks (Section~\\ref{sec:drqa-eval}). Finally, we discuss current limitations, follow-up work and future directions in Section~\\ref{sec:openqa-future}.\n"
  },
  {
    "path": "chapters/openqa/related_work.tex",
"content": "%!TEX root = ../../thesis.tex\n\n\\section{A Brief History of Open-domain QA}\n\\label{sec:openqa-rw}\n\nQuestion answering has been one of the earliest tasks for NLP systems, dating back to the 1960s. One early system, which prefigures modern text-based question answering systems, was the \\sys{Protosynthex} system of \\cite{simmons1964indexing}. The system first formulated a query based on the content words in the question, retrieved candidate answer sentences based on the frequency-weighted term overlap with the question, and finally performed a dependency parse match to get the final answer. Another notable system, \\sys{MURAX} \\cite{kupiec1993murax}, was designed to answer general-knowledge questions over \\sys{Grolier}'s on-line encyclopedia, using shallow linguistic processing and information retrieval (IR) techniques.\n\nThe interest in open domain question answering has increased since 1999, when the QA track was first included as part of the annual TREC competitions\\footnote{\\url{http://trec.nist.gov/data/qamain.html}}. The task was at first defined such that systems were to retrieve small snippets of text that contained an answer to an open-domain question. The track spurred a wide range of QA systems at the time, and the majority of these systems consisted of two stages: an IR system used to select the top $n$ documents or passages which match a query generated from the question, and a window-based word scoring system used to pinpoint likely answers. 
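As a rough caricature of that second stage (our own sketch; the actual TREC-era systems differed considerably in their weighting and matching details), window-based word scoring counts question-term overlap in a fixed window of tokens around each candidate answer position:

```python
def window_score(question_tokens, passage_tokens, position, window=10):
    """Score a candidate answer position by unigram overlap between the
    question and a window of tokens around that position."""
    q = set(t.lower() for t in question_tokens)
    lo = max(0, position - window)
    hi = min(len(passage_tokens), position + window + 1)
    return sum(1 for t in passage_tokens[lo:hi] if t.lower() in q)

def best_position(question_tokens, passage_tokens, window=10):
    """Return the passage position with the highest window overlap score."""
    scores = [window_score(question_tokens, passage_tokens, i, window)
              for i in range(len(passage_tokens))]
    return max(range(len(scores)), key=scores.__getitem__)
```

For the question "what is the capital of france" and the passage "paris is the capital of france ...", the highest-scoring window centers near the answer-bearing region of the passage.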
For more details, readers are referred to \\cite{voorhees1999trec,moldovan2000structure}.\n\nMore recently, with the development of knowledge bases (KBs) such as \\sys{Freebase}~\\cite{bollacker2008freebase} and \\sys{DBpedia}~\\cite{auer2007dbpedia}, many innovations have occurred in the context of question answering from KBs with the creation of resources like \\sys{WebQuestions} \\cite{berant2013semantic} and \\sys{SimpleQuestions} \\cite{bordes2015large} based on \\sys{Freebase}, or on automatically extracted KBs, e.g., OpenIE triples and \\sys{NELL} \\cite{fader2014open}. A lot of progress has been made on knowledge-based question answering and the major approaches are either based on semantic parsing or information extraction techniques~\\cite{yao2014freebase}. However, KBs have inherent limitations (incompleteness and fixed schemas) that motivated researchers to return to the original setting of answering from raw text lately.\n\n\\begin{figure}[t]\n    \\center\n    \\includegraphics[scale=0.25]{img/deepqa.png}\n    \\longcaption{The high-level architecture of IBM's \\sys{DeepQA} used in \\sys{Watson}.}{\\label{fig:watson}The high-level architecture of IBM's \\sys{DeepQA} used in \\sys{Watson}. Image courtesy: \\href{https://en.wikipedia.org/wiki/Watson_(computer)}{https://en.wikipedia.org/wiki/Watson\\_(computer)}.}\n\\end{figure}\n\nThere are also a number of highly developed full pipeline QA approaches using a myriad of resources, including both text collections (Web pages, Wikipedia, newswire articles) and structured knowledge bases (\\sys{Freebase}, \\sys{DBpedia} etc.). A few notable systems include Microsoft's \\sys{AskMSR} \\cite{brill2002askmsr},\nIBM's \\sys{DeepQA} \\cite{ferrucci2010building} and \\sys{YodaQA} \\cite{baudivs2015yodaqa} --- the latter of which is open source and hence reproducible for comparison purposes. 
\\sys{AskMSR} is a search-engine based QA system that relies on ``data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers''. \\sys{DeepQA} is the most representative modern question answering system and its victory at the TV game-show \\sys{Jeopardy!} in 2011 received a great deal of attention. It is a very sophisticated system that consists of many different pieces in the pipeline and it relies on unstructured information as well as structured data to generate candidate answers or vote over evidence. A high-level architecture is illustrated in Figure~\\ref{fig:watson}. \\sys{YodaQA} is an open source system modeled after \\sys{DeepQA}, similarly combining websites, databases and Wikipedia in particular. Comparing against these methods provides a useful datapoint for an ``upper bound'' benchmark on performance.\n\nFinally, there are other types of question answering problems based on different types of resources, including Web tables~\\cite{pasupat2015compositional}, images~\\cite{antol2015vqa}, diagrams~\\cite{kembhavi2017you} or even videos~\\cite{tapaswi2016movieqa}. We are not going into further details as our work focuses on text-based question answering.\n\nOur \\sys{DrQA} system (Section~\\ref{sec:drqa}) focuses on question answering using Wikipedia as the unique knowledge source, such as one does when looking for answers in an encyclopedia.  QA using Wikipedia as a resource has been explored previously. \\newcite{ryu2014open} perform open-domain QA using a Wikipedia-based knowledge model. They combine article content with multiple other answer matching modules based on different types of semi-structured knowledge such as infoboxes, article structure, category structure, and definitions. Similarly, \\newcite{Ahn2004using} also combine Wikipedia as a text resource with other resources, in this case with information retrieval over other documents. 
\\newcite{buscaldi2006mining} also mine knowledge from Wikipedia for QA. Instead of using it as a resource for seeking answers to questions, they focus on validating answers returned by their QA system, and use Wikipedia categories for determining a set of patterns that should fit with the expected answer. In our work, we consider the comprehension of text only, and use Wikipedia text documents as the sole resource in order to emphasize the task of reading comprehension. We believe that adding other knowledge sources or information will further improve the performance of our system.\n"
  },
  {
    "path": "chapters/openqa/system.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Our System: \\sys{DrQA}}\n\\label{sec:drqa}\n\n\\subsection{An Overview}\n\nIn the following we describe our system \\sys{DrQA}, which focuses on answering questions using English Wikipedia as the unique knowledge source for documents. We are interested in building a general-knowledge question answering system, which can answer any sort of factoid question where the answer is contained in and can be extracted from Wikipedia.\n\nThere are several reasons why we chose to use Wikipedia: 1) Wikipedia is a constantly evolving source of large-scale, rich, detailed information that could facilitate intelligent machines. Unlike knowledge bases (KBs) such as \\sys{Freebase} or \\sys{DBPedia}, which are easier for computers to process but too sparsely populated for open-domain question answering, Wikipedia contains up-to-date knowledge that humans are interested in. 2) Many reading comprehension datasets (e.g., \\sys{SQuAD}) are built on Wikipedia, so we can easily leverage these resources, as we will describe shortly. 3) Generally speaking, Wikipedia articles are clean, high-quality and well-formed, and thus they are highly useful resources for open-domain question answering.\n\nUsing Wikipedia articles as the knowledge source causes the task of question answering (QA) to combine the challenges of both large-scale open-domain QA and of machine comprehension of text. In order to answer any question, one must first retrieve the few relevant articles among more than 5 million items, and then scan them carefully to identify the answer. This is reminiscent of how classical TREC QA systems worked, but we believe that neural reading comprehension models will play a crucial role in \\ti{reading} the retrieved articles/passages to obtain the final answer. 
As shown in Figure \\ref{fig:drqa-system}, our system essentially consists of two components: (1) the \\sys{Document Retriever} module for finding relevant articles and (2) a reading comprehension model, \\sys{Document Reader}, for extracting answers from a single document or a small collection of documents.\n\nOur system treats Wikipedia as a collection of articles and does not rely on its internal graph structure. As a result, our approach is generic and could be switched to other collections of documents, books, or even daily updated newspapers. We detail the two components next.\n\n\\subsection{Document Retriever}\n\\label{sec:doc-retriever}\nFollowing classical QA systems, we use an efficient (non-machine learning) document retrieval system to first narrow our search space and focus on reading only articles that are likely to be relevant. A simple inverted index lookup followed by term vector model scoring performs quite well on this task for many question types, compared to the built-in ElasticSearch based Wikipedia Search API \\cite{gormley2015elasticsearch}. Articles and questions are compared as TF-IDF weighted bag-of-word vectors.\n\nWe further improve our system by taking local word order into account with n-gram features. Our best performing system uses bigram counts while preserving speed and memory efficiency by using the hashing of \\cite{weinberger2009feature} to map the bigrams to $2^{24}$ bins with an unsigned \\emph{murmur3} hash.\n\nWe use the \\sys{Document Retriever} as the first part of our full model, by setting it to return 5 Wikipedia\narticles given any question. 
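The TF-IDF scoring with hashed bigram features described above can be sketched as follows. This is only a minimal in-memory illustration under stated assumptions, not the actual \sys{DrQA} implementation: Python's built-in `hash` stands in for the unsigned murmur3 hash, a real system would use an inverted index rather than scoring every document, and all function names here are ours.

```python
import math
from collections import Counter

NUM_BINS = 2 ** 24  # bigrams are hashed into 2^24 bins, as described above

def hashed_ngrams(tokens):
    """Count unigram and bigram features, mapped into hash bins.
    Python's built-in hash() stands in for the unsigned murmur3 hash."""
    grams = list(tokens) + [" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1)]
    return Counter(hash(g) % NUM_BINS for g in grams)

def tfidf(counts, df, n_docs):
    """TF-IDF weighted bag-of-features vector, stored as a sparse dict."""
    return {h: tf * math.log((n_docs + 1) / (df[h] + 1)) for h, tf in counts.items()}

def cosine(u, v):
    dot = sum(w * v.get(h, 0.0) for h, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def retrieve(question, documents, k=5):
    """Return the indices of the top-k documents for a question."""
    docs = [d.lower().split() for d in documents]
    doc_counts = [hashed_ngrams(d) for d in docs]
    df = Counter(h for c in doc_counts for h in c)  # document frequency per hash bin
    doc_vecs = [tfidf(c, df, len(docs)) for c in doc_counts]
    q_vec = tfidf(hashed_ngrams(question.lower().split()), df, len(docs))
    ranked = sorted(range(len(docs)), key=lambda i: -cosine(q_vec, doc_vecs[i]))
    return ranked[:k]
```

Feature hashing keeps the vocabulary bounded regardless of how many distinct bigrams occur, which is what preserves speed and memory efficiency at Wikipedia scale.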
Those articles are then processed by the \\sys{Document Reader}.\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[height=8cm]{img/drqa_system.pdf}\n\\end{center}\n\\longcaption{An overview of DrQA system}{\\label{fig:drqa-system} An overview of our question answering system DrQA.}\n\\end{figure}\n\n\\subsection{Document Reader}\nThe \\sys{Document Reader} takes the top 5 Wikipedia articles and aims to read all the paragraphs and extract the possible answers from them. This is exactly the setup of the span-based reading comprehension problems we studied earlier, and the \\sys{Stanford Attentive Reader} model that we described in Section~\\ref{sec:sar} can be directly plugged into this pipeline.\n\nWe apply our trained \\sys{Document Reader} to each paragraph that appears in the top 5 Wikipedia articles, and it predicts an answer span with a confidence score. To make scores comparable across paragraphs in one or several retrieved documents, we use the unnormalized exponential and take the argmax over all considered paragraph spans for our final prediction. This is just a very simple heuristic, and there are better ways to aggregate evidence over different paragraphs. We will discuss future work in Section~\\ref{sec:openqa-future}.\n\n\\subsection{Distant Supervision}\nWe have built a complete pipeline which integrates a classical retrieval module and our previous neural reading comprehension component. The remaining key question is how we can train this reading comprehension module for the open-domain question answering setting.\n\nThe most direct approach is just to reuse the SQuAD dataset~\\cite{rajpurkar2016squad} as the training corpus, which was also built on top of Wikipedia paragraphs. 
However, this approach is limited in the following ways:\n\n\\begin{itemize}\n    \\item\n        As we discussed earlier in Section~\\ref{sec:future-datasets}, the questions in \\sys{SQuAD} were crowdsourced after the annotators saw the paragraphs, to ensure they can be answered by a span in the passage. This distribution is quite specific and different from that of real-world question answering, where people have a question in mind first and then try to find the answers on the Web or from other sources.\n    \\item\n        Many \\sys{SQuAD} questions are indeed context-dependent. For example, the question \\ti{What individual is the school named after?} is posed on one passage of the Wikipedia article \\ti{Harvard University}, and the question \\ti{What did Luther call these donations?} is based on a passage that describes \\ti{Martin Luther}. Basically, these questions cannot be understood by themselves and thus are useless for open-domain QA problems. \\newcite{clark2018simple} estimated that around 32.6\\% of the questions in \\sys{SQuAD} are either document-dependent or passage-dependent.\n    \\item\n        Finally, the size of SQuAD is rather small (80k training examples). Collecting more training examples should further improve the system's performance.\n\\end{itemize}\n\nTo overcome these problems, we propose a procedure to automatically create additional training examples from other question answering resources. 
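This procedure, detailed next, amounts to a simple filter over retrieved paragraphs. The following toy sketch is not the actual \sys{DrQA} code: `retrieve_paragraphs` is a hypothetical stand-in for our retrieval module, and the string-containment check is a deliberately simple matching criterion.

```python
def build_ds_examples(qa_pairs, retrieve_paragraphs):
    """Create distantly supervised (p, q, a) triples: keep a paragraph p
    only if the retriever returns it for question q and the answer string a
    occurs in p. `retrieve_paragraphs` is a hypothetical retrieval function."""
    examples = []
    for q, a in qa_pairs:
        for p in retrieve_paragraphs(q):
            if a.lower() in p.lower():  # noisy check: mere string containment
                examples.append((p, q, a))
    return examples
```

The containment check is what makes the supervision ``distant'': the paragraph is never verified to actually support the answer, only to contain its surface form.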
The idea is to re-use the efficient information retrieval module that we built: if we already have a question-answer pair $(q, a)$, the retrieval module can find a paragraph $p$ relevant to the question $q$, and the answer segment $a$ appears in that paragraph, then we can create a \\ti{distantly-supervised} training example in the form of a $(p, q, a)$ triple for training the reading comprehension models:\n\n\\begin{eqnarray}\n   & f: (q, a) \\Longrightarrow (p, q, a) \\\\\n    & \\text{ if } p \\in \\text{ Document\\_Retriever }(q) \\text{ and } a \\text{ appears in } p \\nonumber\n\\end{eqnarray}\n\nThis idea is similar in spirit to the popular approach of using distant supervision (DS) for relation extraction \\cite{mintz2009distant}\\footnote{The idea for relation extraction is to pair textual mentions that contain two entities known to hold a relation in an existing knowledge base.}. Although these examples can be noisy to some extent, this offers a cheap way to create distantly supervised examples for open-domain question answering, and they are a useful addition to the \\sys{SQuAD} examples. We will describe the effectiveness of these distantly supervised examples in Section~\\ref{sec:drqa-eval}.\n"
  },
  {
    "path": "chapters/rc_future/datasets.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Future Work: Datasets}\n\\label{sec:future-datasets}\n\nWe have mostly focused on \\sys{CNN/Daily Mail} and \\sys{SQuAD} and demonstrated that 1) neural models are able to achieve super-human or ceiling performance on them, and 2) although these datasets are highly useful, most of the examples are rather simple and don't require much reasoning yet.  What desired properties are still missing in these datasets? What kind of datasets should we work on next? And how can we collect better datasets?\n\n% still a quite restricted setup: (a) the crowdworkers can see the passage when they write the questions. As a result, there is usually a high lexical overlap between the question and the paragraph and thus it greatly eases the difficulty of answering these questions;  (b) questions are only allowed when they can be answered using a single span in the passage and this excludes many possible questions from the dataset such as those \\ti{yes/no}, \\ti{counting} or \\ti{why} questions; (c) it is known that most of the questions in \\sys{SQuAD} don't really need complex reasoning (combining facts from multiple sentences or background knowledge) and they are usually not compositional (which needs to be decomposed into multiple steps of simple questions).\n\nWe think that datasets like \\sys{SQuAD} mainly have the following limitations:\n\\begin{itemize}\n    \\item\n        The questions are \\ti{posed based on the passage}.  That is, if a questioner is looking at the passage while they ask a question, they are quite likely to mirror the sentence structure and to reuse the same words. This eases the difficulty of answering questions, as many question words overlap with the passage words.\n    \\item\n        It only allows questions that are \\ti{answerable by a single span in the passage}. 
This not only implies that all the questions are answerable, but also excludes many questions that could otherwise be posed, such as \\ti{yes/no} and \\ti{counting} questions. As we discussed earlier, most of the questions in \\sys{SQuAD} are factoid questions and the answers are generally short (3.1 tokens on average). Therefore, there are also very few \\ti{why} (cause and effect) and \\ti{how} (procedure) questions in the dataset.\n    \\item\n        Most of the questions can be answered by \\ti{a single supporting sentence} in the passage and don't require multiple-sentence reasoning. \\newcite{rajpurkar2016squad} estimated that only $13.6\\%$ of the examples need multiple-sentence reasoning. Among them, we think that most of the cases involve resolving coreferences, which might be solved by a coreference system.\n\\end{itemize}\n\nTo address these limitations, a number of new datasets have been collected recently. They follow a paradigm similar to \\sys{SQuAD}'s but are constructed in various ways. Table~\\ref{tab:recent-datasets} gives an overview of a few representative datasets. As we can see, these datasets are of a similar order of magnitude (ranging from 33k to 529k training examples), and there is still a gap between the state of the art and human performance (though some gaps are bigger than others). In the following, we describe these datasets in detail and discuss how they tackle the aforementioned limitations, along with their advantages/disadvantages:\n\n\\begin{table}[t]\n    \\centering\n    \\small\n    \\begin{tabular}{l | c c c | c | c c c}\n      \\toprule\n      \\tf{Dataset} & \\tf{\\#Train} & \\tf{\\#Dev} & \\tf{\\#Test} & \\tf{Domain} & \\tf{Metric} & \\tf{Human} & \\tf{SOTA} \\\\\n      \\midrule\n      \\sys{TriviaQA} (Web) & 528,979 & 68,621 & 65,059 & Web & F1 & N/A\\footnote{\\newcite{joshi2017triviaqa} provided oracle scores of \\ti{exact match} accuracies of 82.8\\% and 83.0\\% for the Web and Wikipedia domains respectively. 
These numbers measure the percentage of examples for which the answer can be found in the documents, and differ from human performance.} & 71.3 \\\\\n      \\sys{TriviaQA} (Wiki.)\\footnote{In contrast to the Web domain of \\sys{TriviaQA}, the Wikipedia domain is evaluated over questions instead of documents.} & 61,888 & 9,951 & 9,509 & { Wikipedia} & F1 & N/A & 68.9 \\\\\n      \\sys{RACE} & 87,866 & 4,887 & 4,934 & Exams & Accuracy & 100.0 & 59.0 \\\\\n      \\sys{NarrativeQA}\\footnote{We only list the setting where the summaries are given.} & 32,747 & 3,461 & 10,557 & Wikipedia & ROUGE-L & 57.0 & 36.3 \\\\\n      \\sys{SQuAD 2.0} & 130,319 & 11,873 & 8,862 & Wikipedia & F1 & 89.5 & 83.1 \\\\\n      \\sys{HotpotQA}\\footnote{We only list the ``distractor'' setting.}  & 90,564 & 7,405 & 7,405 & {Wikipedia} & F1 & 91.4 & 59.0 \\\\\n      \\bottomrule\n    \\end{tabular}\n    \\longcaption{A summary of more recent reading comprehension datasets}{\\label{tab:recent-datasets}A summary of more recent reading comprehension datasets. We only show the F1 results for span-prediction tasks and ROUGE-L for free-form answer tasks. The state-of-the-art results are taken from \\newcite{clark2018simple} for \\sys{TriviaQA}~\\cite{joshi2017triviaqa}, \\newcite{radford2018improving} for \\sys{RACE}~\\cite{lai2017race}, \\newcite{kovcisky2018narrativeqa} for \\sys{NarrativeQA}, \\newcite{devlin2018bert} for \\sys{SQuAD 2.0}~\\cite{rajpurkar2018know} and \\newcite{yang2018hotpotqa} for \\sys{HotpotQA}.}\n\\end{table}\n\n\\paragraph{TriviaQA~\\cite{joshi2017triviaqa}.} The key idea of this dataset is that question/answer pairs were collected \\ti{before} constructing the corresponding passages. More specifically, they gathered 95k question-answer pairs from trivia and quiz-league websites and collected textual evidence containing the answer from either Web search results or Wikipedia pages corresponding to the entities mentioned in the question. 
As a result, they collected 650k (passage, question, answer) triples in total. This paradigm effectively solves the problem of questions being dependent on the passage, and also makes it easier to construct a large dataset cheaply. It is worth noting that the passages used in this dataset are mostly long documents (the average document length is 2,895 words, 20 times longer than that of \\sys{SQuAD}), which also poses a scalability challenge for existing models.  However, it has a similar problem to the \\sys{CNN/Daily Mail} dataset --- as the dataset was curated heuristically, there is no guarantee that the passage really provides the answer to the question, and this affects the quality of the training data.\n\n\\paragraph{RACE~\\cite{lai2017race}.} Standardized tests designed for humans are a natural way to evaluate machines' reading comprehension abilities. \\sys{RACE} is a multiple-choice dataset collected from English exams for middle-school and high-school Chinese students within the 12--18 age range. All the questions and answer options were created by experts. As a result, the dataset is more difficult than most existing datasets, and it was estimated that 26\\% of the questions require multiple-sentence reasoning. The state-of-the-art performance is only 59\\% so far (each question has 4 candidate answers).\n\n\\paragraph{NarrativeQA~\\cite{kovcisky2018narrativeqa}.} This is a challenging dataset in which crowdworkers were asked to pose questions based on the plot summaries of books or movies from Wikipedia. The answers are free-form human-generated text; in particular, the annotators were encouraged to use their own words, and copying was not allowed in the interface. The plot summaries usually contain more characters and events, and are more complex to follow, than news articles or Wikipedia paragraphs. 
The dataset consists of two settings: one is to answer questions based on the summary (659 tokens on average), which is more similar to \\sys{SQuAD}, and the other is to answer questions based on the full book or movie script (62,528 tokens on average). The second setting is especially difficult, as it requires IR components to locate relevant information in the long documents. One problem with this dataset is that human agreement is low because of the free-form answers, and thus it is difficult to evaluate.\n\n\\paragraph{SQuAD 2.0~\\cite{rajpurkar2018know}.} \\sys{SQuAD 2.0} proposed to add 53,775 negative examples to the original \\sys{SQuAD} dataset. These questions are not answerable from the passage, but look similar to the positive ones (they are relevant, and the passage contains a plausible answer). To work well on the dataset, systems need to not only answer questions but also determine when no answer is supported by the paragraph and abstain from answering. This is an important aspect in practical applications but has been omitted in previous datasets.\n\n\\paragraph{HotpotQA~\\cite{yang2018hotpotqa}.} This dataset aims to construct questions which need multiple supporting documents to answer. To approach this, the crowdworkers were required to ask questions based on two relevant Wikipedia paragraphs (there is a hyperlink from the first paragraph of one article to the other). It also offers a new type of factoid comparison question, for which systems need to compare two entities on some shared properties. 
The dataset consists of two settings for evaluation -- one is called the \\ti{distractor} setting, in which each question is provided with 10 passages, including the two passages used for constructing the question and 8 distractor passages retrieved from Wikipedia; the second setting is to use the full Wikipedia to answer the question.\n\nCompared to \\sys{SQuAD}, these datasets either require more complex reasoning across sentences or documents, or need to handle longer documents, or need to generate free-form answers instead of extracting a single span, or need to predict when there is no answer in the passage. These datasets pose new challenges, and many are still beyond the scope of existing models. We believe that they will further inspire a series of modeling innovations in the future. Once our models reach the next level of performance, we will need to set out to construct even more difficult datasets.\n"
  },
  {
    "path": "chapters/rc_future/models.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Future Work: Models}\n\\label{sec:future-models}\n\nNext we turn to the research directions of models for future work. We first describe the desiderata of reading comprehension models. Most of the existing work only focuses on \\ti{accuracy} --- given a standard training/development/testing split of a dataset, the major goal is to get the best accuracy score on the testing set. However, we argue that there are other important aspects which have been overlooked that we will need to work on in the future, including \\ti{speed and scalability}, \\ti{robustness} and \\ti{interpretability}. Lastly, we discuss what important elements are still missing in the current models, to solve more difficult reading comprehension problems.\n\n\\subsection{Desiderata}\nBesides \\ti{accuracy} (achieving a better performance score on a standard dataset), the following desiderata are also very important for future work:\n\n\\paragraph{Speed and Scalability.} How to build faster models (for both training and inference) and how to scale to longer documents is an important direction to pursue. Building faster models for training can lead to lower turnaround time for experimentation and also enable us to train on bigger datasets. Building faster models for inference is highly useful when we deploy the models in practical use. Also, it is unrealistic to encode a very long document (e.g., \\sys{TriviaQA}) or even a full book (e.g., \\sys{NarrativeQA}) using an RNN and this still remains a severe challenge. For example, the average document length of \\sys{TriviaQA} is 2,895 tokens and the authors truncated the documents to the first 800 tokens for the sake of scalability. 
The average document length of \\sys{NarrativeQA} is 62,528 tokens, and the authors had to first retrieve a small number of relevant passages from the story using an IR system.\n\nExisting solutions to these problems include:\n\\begin{itemize}\n    \\item\n        Replacing LSTMs with non-recurrent models such as \\sys{Transformer}~\\cite{vaswani2017attention} or lighter recurrent units such as \\sys{SRU}~\\cite{lei2018simple}, as we discussed in Section~\\ref{sec:alt-lstms}.\n    \\item\n        Training models which learn to skip parts of the documents so that they don't need to read all of the content. These models can run much faster while still retaining similar performance. Representative works in this line include \\newcite{yu2017learning} and \\newcite{seo2018neural}.\n    \\item\n        The choice of optimization algorithms can also greatly affect the convergence speed. Multi-GPU training and hardware performance are also important aspects to consider, but they are beyond the scope of this thesis. \\newcite{coleman2017dawnbench} provide a benchmark\\footnote{\\href{https://dawn.cs.stanford.edu/benchmark/}{https://dawn.cs.stanford.edu/benchmark/}} which measures the end-to-end training and inference time needed to achieve a state-of-the-art accuracy level for a wide range of tasks, including \\sys{SQuAD}.\n\\end{itemize}\n\n\n\\paragraph{Robustness.} We discussed in Section~\\ref{sec:squad-errors} that existing models are very brittle to adversarial examples, which will become a severe problem when we deploy these models in the real world. Moreover, most current works follow the standard paradigm: training and evaluating on the splits of one dataset. It is known that if we train our models on one dataset and evaluate on another, the performance will drop dramatically due to their different text sources and construction methods. 
For future work, we need to consider:\n\\begin{itemize}\n    \\item How to create better adversarial training examples and incorporate them into the training process.\n    \\item How to make better use of transfer learning and multi-task learning, so that we can build models that perform well across various datasets.\n    \\item Whether we need to break the standard paradigm of supervised learning, and how to create better ways of evaluating our current models for the sake of building more robust ones.\n\\end{itemize}\n\n\\paragraph{Interpretability.} The last important aspect is \\ti{interpretability}, which has been mostly ignored in current systems.  Our future systems should not only be able to provide the final answers, but also provide the rationales behind their predictions, so users can decide whether they can trust the outputs and leverage them or not. Neural networks are especially notorious for the fact that the end-to-end training paradigm makes these models a black box, and it is hard to interpret their predictions. This is especially crucial if we want to apply these systems to the medical or legal domains.\n\nInterpretability can have different definitions. In our context, we think there could be several ways to approach it:\n\\begin{itemize}\n    \\item\n        The easiest way is to require the models to learn to extract pieces of the input documents as supporting evidence. This has been studied before (e.g., \\cite{lei2016rationalizing}) for sentence classification problems but not yet in reading comprehension problems.\n    \\item\n        A more complex way is for the models to actually generate rationales. Instead of only highlighting the relevant pieces of information in the passage, the models need to explain how these pieces are connected to finally arrive at the answer. 
Take Figure~\\ref{fig:sar-squad-errors} (c) as an example: the system needs to reason that the two cities are the two largest, and that since 3.7 million is bigger than 1.3 million, the city of 1.3 million people is the second largest. We think this desideratum is very important but far beyond the scope of current models.\n    \\item\n        Finally, another important aspect to consider is what training resources we can obtain to approach this level of interpretability. Inferring rationales from the final answers is feasible but quite difficult. We should consider collecting human explanations as supervision for training rationales in the future.\n\\end{itemize}\n\n\\subsection{Structures and Modules}\nIn this section, we discuss what elements are missing from the current models if we want to solve more difficult reading comprehension problems.\n\nFirst of all, current models either are built on sequence models or treat all pairs of words symmetrically (e.g., \\sys{Transformer}), and they omit the inherent structure of language. On the one hand, this forces our models to learn all the relevant linguistic information from scratch, which makes learning more difficult. On the other hand, the NLP community has for years put a lot of effort into studying linguistic representation tasks (e.g., syntactic parsing, coreference) and building many linguistic resources and tools. Language encodes meaning in terms of hierarchical, nested structures on sequences of words. Would it still be useful to encode linguistic structures more explicitly in our reading comprehension models?\n\nFigure~\\ref{fig:corenlp-output} illustrates the \\sys{CoreNLP}~\\cite{manning2014stanford} output of several examples in \\sys{SQuAD}. 
We believe that this structural information would be useful as follows:\n\n\\begin{enumerate}[(a)]\n    \\item\n        The information that \\ti{2,400} is a \\ti{numeric modifier} of \\ti{professors} should help answer the question \\ti{What is the total number of professors, instructors, and lecturers at Harvard?} (We have seen this example as an error case in Figure~\\ref{fig:sar-squad-errors}).\n    \\item\n        The coreference information that \\ti{it} refers to \\ti{Harvard} should help answer the question \\ti{Starting in what year has Harvard topped the Academic Rankings of World Universities?}.\n\\end{enumerate}\n\nTherefore, we think that this linguistic knowledge and these structures would still be a useful addition to the current models. The remaining questions that we need to answer are: 1) What are the best ways to incorporate these structures into sequence models? 2) Do we want to model the structures as latent variables or rely on off-the-shelf linguistic tools? In the latter case, are the current tools good enough that the models can benefit more (rather than suffering from noise)? Can we further improve the performance of these representation tasks?\n\n\\begin{figure}[t]\n  \\center\n  (a)\n  \\includegraphics[scale=0.20]{img/dep_example.png}\n  (b)\n  \\includegraphics[scale=0.42]{img/coref_example.png}\n  \\longcaption{Example output of \\sys{CoreNLP}: dependencies and coreference}{\\label{fig:corenlp-output} Example output of \\sys{CoreNLP}: (a) dependencies and (b) coreference. The image is taken from \\href{http://corenlp.run}{http://corenlp.run}.}\n\\end{figure}\n\nAnother aspect we think is still missing from most existing models is \\ti{modules}. The task of reading comprehension is inherently very complex, and different types of examples require different types of reasoning capabilities. 
Learning everything through one giant neural network remains a grand challenge (this is reminiscent of why the attention mechanism was proposed: we don't want to squash the meaning of a sentence or a paragraph into one vector!). We believe that, if we want to approach a deeper level of reading comprehension, our future models will be more structured and more modularized: solving one comprehensive task can be decomposed into many subproblems, and we can tackle each smaller subproblem (e.g., each reasoning type) separately and combine all of them in the end.\n\nThe idea of \\ti{modules} has been implemented in \\sys{Neural Module Networks (NMN)} \\cite{andreas2016learning} before. They first perform a dependency parse of the question, and then decompose the question answering problem into several ``modules'' based on the parse structure. One example they used for a visual question answering (VQA) task is that the question ``What color is the bird?'' can be decomposed into two modules: one module is used to detect the bird in the given image, and another module is used to detect the color of the found region (the bird). We believe that this sort of approach holds promise for answering questions such as \\ti{What is the population of the second largest city in California?} (Figure~\\ref{fig:sar-squad-errors} (c)). However, \\sys{NMN} has only been studied on visual question answering or small knowledge-base question answering problems so far, and applying it to reading comprehension problems can be more challenging due to the flexibility of language variations and question types.\n"
  },
  {
    "path": "chapters/rc_future/overview.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n% \\section{Introduction}\nIn the previous chapter, we described how neural reading comprehension models succeeded on current reading comprehension benchmarks and what their key insights are. Despite this rapid progress, there is still a long way to go towards genuine human-level reading comprehension. In this chapter, we will discuss future work and open questions.\n\nWe first examine the error cases of existing models in Section~\\ref{sec:squad-errors}, and conclude that they still fail on ``easy'' or ``trivial'' cases despite their high accuracies on average.\n\nAs we discussed earlier, the success of recent reading comprehension is attributed to both the creation of large-scale datasets and the development of neural reading comprehension models. In the future, we believe both components will still be equally important. We discuss future work on datasets and models in Sections~\\ref{sec:future-datasets} and \\ref{sec:future-models} respectively. What is still missing in the existing datasets and models? How can we approach that?\n\nFinally, we review several important research questions in this field in Section~\\ref{sec:research-questions}.\n\n\\section{Is SQuAD Solved Yet?}\n\\label{sec:squad-errors}\n\nAlthough we have already achieved super-human performance on the \\sys{SQuAD} dataset, does this indicate that our reading comprehension models are capable of solving all the \\sys{SQuAD} examples, or any examples with the same level of difficulty?\n\nFigure~\\ref{fig:sar-squad-errors} demonstrates some failure cases of our \\sys{Stanford Attentive Reader} model described in Section \\ref{sec:sar}. As we can see, the model predicts the answer type perfectly for all these examples: it predicts a number for the questions \\ti{what is the total number of \\ldots ?} and \\ti{what is the population \\ldots ?} and a team name for the question \\ti{Which team won Super Bowl 50?}. 
However, the model fails to understand the subtleties expressed in the text and cannot distinguish among the candidate answers. In more detail:\n\n\\begin{enumerate}[(a)]\n  \\item The number \\ti{2,400} modifies \\ti{professors, lecturers, and instructors} while \\ti{7,200} modifies \\ti{undergraduates}. However, the system failed to identify this, and we believe that linguistic structures (e.g., syntactic parsing) can help resolve this case.\n  \\item Both teams \\ti{Denver Broncos} and \\ti{Carolina Panthers} are modified by the word \\ti{champion}, but the system failed to infer that if ``X defeated Y'', then ``X won''.\n  \\item The system predicted \\ti{100,000} probably because it is closer to the word \\ti{population}. However, to answer the question correctly, the system has to identify that \\ti{3.7 million} is the population of \\ti{Los Angeles} and that \\ti{1.3 million} is the population of \\ti{San Diego}, compare the two numbers, and infer that \\ti{1.3 million} is the answer because San Diego is the \\ti{second largest} city. This is a difficult example and probably beyond the reach of all existing systems.\n\\end{enumerate}\n\n\\begin{figure}[p]\n    \\centering\n    \\begin{tabular}{l | p{13.5cm}}\n    \\hline\n    (a) &\\tf{Question}: What is the total number of professors, instructors, and lecturers at Harvard? \\\\\n    & \\tf{Passage}: Harvard's \\blue{2,400} professors, lecturers, and instructors instruct \\red{7,200} undergraduates and 14,000 graduate students. The school color is crimson, which is also the name of the Harvard sports teams and the daily newspaper, The Harvard Crimson. 
The color was unofficially adopted (in preference to magenta) by an 1875 vote of the student body, although the association with some form of red can be traced back to 1858, when Charles William Eliot, a young graduate student who would later become Harvard's 21st and longest-serving president (1869--1909), bought red bandanas for his crew so they could more easily be distinguished by spectators at a regatta. \\\\\n    & \\tf{Gold answer}: 2,400 \\\\\n    & \\tf{Predicted answer}: 7,200 \\\\\n    \\hline\n    (b) & \\tf{Question}: Which team won Super Bowl 50? \\\\\n    & \\tf{Passage}: Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion \\blue{Denver Broncos} defeated the National Football Conference (NFC) champion \\red{Carolina Panthers} 24--10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the ``golden anniversary'' with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as ``Super Bowl L''), so that the logo could prominently feature the Arabic numerals 50. \\\\\n    & \\tf{Gold answer}: Denver Broncos \\\\\n    & \\tf{Predicted answer}: Carolina Panthers \\\\\n    \\hline\n    (c) & \\tf{Question}: What is the population of the second largest city in California? \\\\\n    & \\tf{Passage}: Los Angeles (at 3.7 million people) and San Diego (at \\blue{1.3 million} people), both in southern California, are the two largest cities in all of California (and two of the eight largest cities in the United States). In southern California there are also twelve cities with more than 200,000 residents and 34 cities over \\red{100,000} in population. 
Many of southern California's most developed cities lie along or in close proximity to the coast, with the exception of San Bernardino and Riverside. \\\\\n    & \\tf{Gold answer}: 1.3 million \\\\\n    & \\tf{Predicted answer}: 100,000 \\\\\n    \\hline\n    \\end{tabular}\n    \\longcaption{Failure cases of our model on SQuAD}{\\label{fig:sar-squad-errors}Several failure cases of our model on \\sys{SQuAD}. Gold answers are marked as \\blue{blue} and predicted answers are marked as \\red{red}.}\n\\end{figure}\n\n\\begin{figure}[p]\n    \\centering\n    \\small\n    \\begin{tabular}{l | p{13.5cm}}\n    \\hline\n    (d) &\\tf{Question}: What is the least number of members a board of trustees can have? \\\\\n    & \\tf{Passage}: The Book of Discipline is the guidebook for local churches and pastors and describes in considerable detail the organizational structure of local United Methodist churches. All UM churches must have a board of trustees with at least \\blue{three} members and no more than \\red{nine} members and it is recommended that no gender should hold more than a 2/3 majority. All churches must also have a nominations committee, a finance committee and a church council or administrative council. Other committees are suggested but not required such as a missions committee, or evangelism or worship committee. Term limits are set for some committees but not for all. The church conference is an annual meeting of all the officers of the church and any interested members. This committee has the exclusive power to set pastors' salaries (compensation packages for tax purposes) and to elect officers to the committees. \\\\\n    & \\tf{Gold answer}: three \\\\\n    & \\tf{Predicted answer}: nine \\\\\n    \\hline\n    (e) & \\tf{Question}: Where does centripetal force go? 
\\\\\n    & \\tf{Passage}: where  is the mass of the object,  is the velocity of the object and  is the distance to the center of the circular path and  is the unit vector pointing in the radial direction outwards from the center. This means that the unbalanced centripetal force felt by any object is always directed toward \\blue{the center of the curving path}. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. The unbalanced force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which \\red{changes its direction}. \\\\\n    & \\tf{Gold answer}: the center of the curving path \\\\\n    & \\tf{Predicted answer}: changes its direction \\\\\n    \\hline\n    (f) & \\tf{Question}: How many times have the Panthers been in the Super Bowl? \\\\\n    & \\tf{Passage}: The Panthers finished the regular season with a 15–1 record, and quarterback Cam Newton was named the NFL Most Valuable Player (MVP). They defeated the Arizona Cardinals 49–15 in the NFC Championship Game and advanced to their \\blue{second} Super Bowl appearance since the franchise was founded in 1995. The Broncos finished the regular season with a 12–4 record, and denied the New England Patriots a chance to defend their title from Super Bowl XLIX by defeating them 20–18 in the AFC Championship Game. They joined the Patriots, Dallas Cowboys, and Pittsburgh Steelers as one of four teams that have made \\red{eight} appearances in the Super Bowl. 
\\\\\n    & \\tf{Gold answer}: second \\\\\n    & \\tf{Predicted answer}: eight \\\\\n    \\hline\n    \\end{tabular}\n    \\longcaption{Failure cases of the currently best model (\\sys{BERT} ensemble) on SQuAD}{\\label{fig:bert-squad-errors}Several failure cases of the currently best model (\\sys{BERT} ensemble) on \\sys{SQuAD}. Gold answers are marked as \\blue{blue} and predicted answers are marked as \\red{red}.}\n\\end{figure}\n\n\\begin{figure}[!h]\n    \\centering\n    \\begin{tabular}{p{13.5cm}}\n    \\hline\n      \\tf{Question}: What is the name of the quarterback who was 38 in Super Bowl XXXIII? \\\\\n      \\tf{Passage}: Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39. The past record was held by \\blue{John Elway}, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver’s Executive Vice President of Football Operations and General Manager. \\ti{Quarterback \\red{Jeff Dean} had jersey number 37 in Champ Bowl XXXIV.} \\\\\n    \\hline\n    \\end{tabular}\n    \\longcaption{An adversarial example used in~\\cite{jia2017adversarial}}{\\label{fig:adversarial-example}An adversarial example used in~\\cite{jia2017adversarial}, where a distracting sentence is added to the end of the passage (italicized). \\blue{Blue}: the correct answer and \\red{red}: the predicted answer.}\n\\end{figure}\n\nWe also took a closer look at the predictions of the best SQuAD model so far --- an ensemble of 7 \\sys{BERT} models \\cite{devlin2018bert}. As demonstrated in Figure~\\ref{fig:bert-squad-errors}, this strong model still makes simple mistakes that humans would hardly ever make. 
It is fair to conjecture that these models have been doing very sophisticated text matching, while still having difficulty understanding the inherent structure among the entities and events expressed in the text.\n\nLastly, \\newcite{jia2017adversarial} find that if we add a distracting sentence to the end of the passage (see an example in Figure~\\ref{fig:adversarial-example}), the average performance of current reading comprehension systems drops drastically from 75.4\\% to 36.4\\%. These distracting sentences have high word overlap with the question, but do not actually contradict the correct answer and do not mislead human readers. The performance is even worse if the distracting sentence is allowed to be an ungrammatical sequence of words. These results suggest that 1) the current models strongly depend on lexical cues between the passage and the question, which is why the distracting sentences can be so destructive; and 2) even though the models achieve high accuracy on the original development set, they are really not robust to adversarial examples. This is a critical problem of the standard supervised learning paradigm, and it makes existing models difficult to deploy in the real world. We will discuss the property of robustness more in Section~\\ref{sec:future-models}.\n\nTo sum up, we believe that, although very high accuracy has already been obtained on the \\sys{SQuAD} dataset, the current models only focus on surface-level information of the text, and still make simple errors when it comes to a (slightly) deeper level of understanding. On the other hand, the high accuracies also indicate that most of the \\sys{SQuAD} examples are rather easy and require little understanding. There are difficult examples which require complex reasoning in \\sys{SQuAD} (for example, (c) in Figure~\\ref{fig:sar-squad-errors}), but due to their scarcity, accuracy on them is not really reflected in the averaged metric. 
Furthermore, the high accuracies only hold when the training and development sets come from the same distribution; generalization remains a severe problem when the distributions differ. In the next two sections, we discuss the possibilities of creating more challenging datasets and building more effective models.\n"
  },
  {
    "path": "chapters/rc_future/questions.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Research Questions}\n\\label{sec:research-questions}\n\nIn the last section, we discuss a few central research questions in this field, which still remain as open questions and yet to be answered in the future.\n\n\\subsection{How to Measure Progress?}\nThe first question is: \\ti{How can we measure the progress of this field?} The evaluation metrics are certainly clear indicators of measuring progress on our reading comprehension benchmarks. Does this indicate that we make real progress on reading comprehension in general? How can we tell if some progress on one benchmark can generalize to others? How about if model $A$ works better than model $B$ on one dataset, while model $B$ works better on the other dataset? How to tell how far these computer systems are sill from genuine human-level reading comprehension?\n\nOn the one hand, we think that taking human's standardized tests could be a good strategy for evaluating the performance of machine reading comprehension systems. These questions are usually carefully curated and designed to test human's reading comprehension abilities at different levels. To get computer systems aligned with human measurements is a proper way in building natural language understanding systems.\n% {\\red{TODO: Not always correct --- some questions are easy for humans to answer but difficult for machines}}.\n\nOn the other hand, we believe that it would be desirable to integrate many reading comprehension datasets as a testing suite for evaluation in the future, instead of only testing on one single dataset. This will help us better distinguish what are genuine progress for reading comprehension and what might be just overfitting to one specific dataset.\n\nMore importantly, we need to understand our existing datasets better: characterizing their quality and what skills are required to answer the questions. 
This will be a crucial step for building more challenging datasets and analyzing the behavior of our models. Besides our work on analyzing the \\sys{CNN/Daily Mail} examples in \\newcite{chen2016thorough}, \\newcite{sugawara2017evaluation} attempted to separate reading comprehension skills into two disjoint sets: \\ti{prerequisite skills} and \\ti{readability}. Prerequisite skills measure the different types of reasoning and knowledge required to answer the question, and 13 skills are defined: object tracking, mathematical reasoning, coreference resolution, logical reasoning, analogy, causal relation, spatiotemporal relation, ellipsis, bridging, elaboration, meta-knowledge, schematic clause relation and punctuation. Readability measures the ``text ease of processing'', and a wide range of linguistic features/human readability measurements are used. The authors concluded that these two sets are weakly correlated, and that it is possible to design difficult questions over contexts that are easy to read. These studies suggest that we could construct datasets and develop models based on these properties separately.\n\nIn addition, \\newcite{sugawara2018what} designed a few simple filtering heuristics and divided the examples from many existing datasets into a hard subset and an easy subset, based on 1) whether the question can be answered using only its first few words, and 2) whether the answer is contained in the passage sentence most similar to the question. They observed that baseline performance on the hard subsets degrades markedly compared to that on the entire datasets. 
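As a rough illustration, the second of these heuristics can be written as a tiny word-overlap check. This is our own sketch under simple assumptions (whitespace tokenization, case-insensitive substring matching), not the authors' exact implementation:

```python
# Illustrative sketch (ours, not the original implementation) of the second
# heuristic: mark an example as "easy" if the gold answer appears inside the
# passage sentence that shares the most word types with the question.
def most_similar_sentence(question, sentences):
    """Return the passage sentence sharing the most word types with the question."""
    q_words = set(question.lower().split())
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

def is_easy(question, sentences, answer):
    """An example counts as 'easy' if the answer lies in the most similar sentence."""
    return answer.lower() in most_similar_sentence(question, sentences).lower()
```

Examples that survive such filters (the "hard" subset) are the ones where simple sentence-level matching is not enough.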
Moreover, \\newcite{kaushik2018how} analyzed the performance of existing models using passage-only or question-only information, and found that these models can sometimes work surprisingly well, which suggests that annotation artifacts exist in some of the existing datasets.\n\nIn conclusion, we believe that if we want to make steady progress on reading comprehension in the future, we will have to answer these basic questions about the difficulty of examples first. Understanding what the datasets require, and what our current systems can and cannot do, will help us identify the challenges we are facing and measure progress.\n\n\\subsection{Representations vs. Architecture: Which is More Important?}\n\\label{sec:rep-vs-arch}\n\n\\begin{figure}[!t]\n    \\centering\n    \\includegraphics[scale=0.45]{img/rep_vs_arch.pdf}\n    \\longcaption{A comparison of a complex architecture vs. a simple architecture with pre-training}{\\label{fig:rep-vs-arch}A comparison of a complex architecture (left) vs. a simple architecture with pre-training (right). The parameters in the dashed box can be pre-trained from unlabeled text, while all the remaining parameters are initialized randomly and learned from the reading comprehension datasets.}\n\\end{figure}\n\nThe second important question is to understand the roles of representations vs. architectures in the performance of reading comprehension models. Since \\sys{SQuAD} was created, there has been a trend of increasing the complexity of neural architectures. In particular, more and more complex attention mechanisms have been proposed to capture the similarity between the passage and the question (Section~\\ref{sec:attention-mechanisms}). 
However, recent works~\\cite{radford2018improving,devlin2018bert} showed that if we pre-train a deep language model on large text corpora, a simple model which takes the concatenation of the question and the passage, without modeling any direct interactions between the two, can work extremely well on reading comprehension datasets such as \\sys{SQuAD} and \\sys{RACE} (see Table~\\ref{tab:squad-results} and Table~\\ref{tab:recent-datasets}).\n\nAs illustrated in Figure~\\ref{fig:rep-vs-arch}, the first class of models (left) only builds on top of word embeddings (each word type has a vector representation) pre-trained from unlabeled text, while all the remaining parameters (including all the weights to compute various attention functions) need to be learned from the limited training data. The second class of models (right) keeps the model architecture very simple and only models the question and passage as a single sequence. The whole model is pre-trained and all the parameters are kept. Only a few new parameters are added (e.g., the parameters for predicting the start and end positions for \\sys{SQuAD}), and the remaining parameters are fine-tuned on the training set of the reading comprehension task.\n\nWe think these two classes of models represent two extremes. On the one hand, this certainly demonstrates the incredible power of unsupervised representations. With a powerful language model pre-trained from a huge amount of text, the model already encodes a great deal of properties of language, and a simple model which concatenates the passage and the question is sufficient to learn the dependencies between the two. On the other hand, when only word embeddings are given, it seems that carefully modeling the interactions between the passage and the question (or giving the model more prior knowledge) helps. 
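As a concrete illustration of how few task-specific parameters the second class of models needs, here is a minimal sketch of ours with toy shapes, where `H` merely stands in for the pre-trained encoder's outputs over the concatenated question-and-passage sequence:

```python
import numpy as np

# Sketch (ours, illustrative shapes only): a pre-trained encoder maps the
# [question; passage] sequence to hidden vectors H, and the ONLY new
# task-specific parameters are two vectors scoring each position as the
# answer start or end; everything else is fine-tuned, not learned from scratch.
rng = np.random.default_rng(0)
n, h = 12, 16                   # sequence length, hidden size (toy values)
H = rng.normal(size=(n, h))     # stands in for pre-trained encoder outputs
w_start = rng.normal(size=h)    # new parameter: start-position scorer
w_end = rng.normal(size=h)      # new parameter: end-position scorer

def softmax(z):
    z = z - z.max()             # numerical stability
    e = np.exp(z)
    return e / e.sum()

p_start = softmax(H @ w_start)  # distribution over answer start positions
p_end = softmax(H @ w_end)      # distribution over answer end positions
```

All the interaction between question and passage happens inside the pre-trained encoder; no attention machinery is added on top.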
In the future, we suspect that we will need to combine the two: a model like \\sys{BERT} alone is too coarse to handle the examples which require complex reasoning.\n\n\\subsection{How Many Training Examples Are Needed?}\nThe third question is \\ti{how many training examples are actually needed?} We have discussed many times that the success of neural reading comprehension is driven by large-scale supervised datasets. All the datasets that we have been actively working on contain at least 50,000 examples. Can we always embrace data abundance and further improve the performance of our systems? Is it possible to train a neural reading comprehension model with only hundreds of annotated examples today?\n\nWe think there isn't a clear answer yet. On the one hand, there is clear evidence that having more data helps. \\newcite{bajgar2016embracing} demonstrated that inflating the cloze-style training data constructed from books available through Project Gutenberg can provide a boost of 7.4\\%--14.8\\% on the \\sys{Children's Book Test} (\\sys{CBT}) dataset~\\cite{hill2016goldilocks} using the same model. We discussed before that using data augmentation techniques~\\cite{yu2018qanet} or augmenting the training data with \\sys{TriviaQA} can help improve the performance on \\sys{SQuAD} (\\# training examples = 87,599).\n\nOn the other hand, pre-trained (language) models~\\cite{radford2018improving,devlin2018bert} can help us reduce the dependence on large-scale datasets. In these models, most of the parameters are already pretrained on abundant unlabeled data and are only fine-tuned during training.\n\nIn the future, we should encourage more research on unsupervised learning and transfer learning. Leveraging unlabeled data (e.g., text) or other cheap resources or supervision (e.g., datasets like \\sys{CNN/Daily Mail}) will relieve us of the need to collect expensive annotated data. 
We should also seek better and cheaper ways of collecting supervised datasets.\n%\n% \\red{Chris: I think the main substantive thing missing here is a discussion of more difficult types of questions that probe deeper levels of Reading Comprehension. That is a middle school reading comprehension exercise normally is not so much about answering factoid style questions that but showing that you understood the reasoning and implications of the text and what the author is trying to convey. Often this is done with how/why questions: In the story, why is Cynthia upset with her mother? How does John attempt to make up for his original mistake? How does the author indicate that Benjamin is scared to be left alone? But there are other aspects of deeper comprehension too. We can argue about how successful they have been, but I think very clearly the goal of the AI2 Aristo work has been to try to have comprehension tests where you actually have to understand the underlying science of what is being discussed, rather than just answering from text matching. It would be good to have a paragraph or two on issues like this --- assessing deeper reading comprehension than question text matching.}\n"
  },
  {
    "path": "chapters/rc_models/advances.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Further Advances}\n\\label{sec:advances}\n\nIn this section, we summarize recent advances in neural reading comprehension. We divide them into the following four categories: {word representations}, {attention mechanisms}, {alternatives to LSTMs}, and {others} (such as training objectives, data augmentation). We give a summary and discuss their importance in the end.\n\n\n\\subsection{Word Representations}\nThe first category is better word representations for question and passage words, so the neural models are built off of better grounds. Learning better distributed word representations from text or finding the best set of word embeddings for specific tasks still remains an active research topic --- for example, \\newcite{mikolov2017advances} find that replacing \\sys{GloVe} pre-trained vectors with the new \\sys{fastText} vectors~\\cite{bojanowski2017enriching} in our model brings about 1 point of improvement on \\sys{SQuAD}. More than that, there are two key ideas which have been proved (highly) useful:\n\n\\subsubsection*{Character embeddings}\nThe first idea is to use character-level embeddings to represent words, which are especially helpful for rare or out-of-vocabulary words. Most of the existing works employ a \\sys{convolutional neural network} (CNN), which can usefully exploit the surface patterns of $n$-gram characters. More concretely, let  $\\mathcal{C}$ be the vocabulary of characters and each word type $x$ can be represented as a sequence of characters $(c_1, \\ldots, c_{|x|}), \\forall c_i \\in \\mathcal{C}$. We first map each character in $\\mathcal{C}$ into a $d_c$-dimensional vector, so word $x$ can be represented as $\\mf{c}_1, \\ldots, \\mf{c}_{|x|}$.\n\nNext we apply a convolution layer with a filter $\\mf{w} \\in \\R^{d_c \\times w}$ of width $w$, and we denote $\\mf{c}_{i:i+j}$ as the concatenation of $\\mf{c}_i, \\mf{c}_{i + 1}, \\ldots, \\mf{c}_{i + j}$. 
Therefore, for $i = 1, \\ldots, |x| - w + 1$, we can apply this filter $\\mf{w}$, after which we add a bias $b$ and apply a $\\tanh$ nonlinearity as follows:\n\\begin{equation}\n    f_i = \\tanh\\left(\\mf{w}^{\\intercal} \\mf{c}_{i:i+w-1} + b \\right).\n\\end{equation}\nFinally, we can apply a \\ti{max-pooling} operation on $f_1, \\ldots, f_{|x| - w + 1}$ and obtain one scalar feature:\n\\begin{equation}\n    f = \\max_{i}{\\{f_i\\}}.\n\\end{equation}\nThis feature essentially picks out a character $n$-gram, where the size of the $n$-gram corresponds to the filter width $w$. We can repeat the above process with $d^*$ different filters $\\mf{w}_1, \\ldots, \\mf{w}_{d^*}$. As a result, we obtain a character-based word representation $\\mf{E}_c(x) \\in \\R^{d^*}$ for each word type. All the character embeddings, filter weights $\\{\\mf{w}\\}$ and biases $\\{b\\}$ are learned during training. More details can be found in \\newcite{kim2014convolutional}. In practice, the dimension of character embeddings $d_c$ usually takes a small value (e.g., 20), the width $w$ usually takes a value of $3$--$5$, while $100$ is a typical value for $d^*$.\n\n\\subsubsection*{Contextualized word embeddings}\nAnother important idea is \\ti{contextualized word embeddings}. Different from traditional word embeddings, in which each word type is mapped to one single vector, contextualized word embeddings assign each word a vector as a function of the entire input sentence. These word embeddings can better model complex characteristics of word use (e.g., syntax and semantics) and how these uses vary across linguistic contexts (i.e., polysemy).\n\nA concrete implementation is \\sys{ELMo}, detailed in \\newcite{peters2018deep}: their contextualized word embeddings are learned functions of the internal states of a deep bidirectional language model, which is pretrained on a large text corpus. 
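Before turning to the details of ELMo, the character-level convolution-and-max-pooling construction described above can be sketched as follows. This is a toy illustration of ours (lowercase alphabet only, random weights), not the thesis implementation:

```python
import numpy as np

# Toy sketch (ours) of the character-CNN word representation: embed each
# character, slide a width-w filter over the character sequence, add a bias,
# apply tanh, and max-pool over positions to get one feature per filter.
rng = np.random.default_rng(0)
chars = "abcdefghijklmnopqrstuvwxyz"
d_c, w, d_star = 4, 3, 5                     # char dim, filter width, #filters
E_char = rng.normal(size=(len(chars), d_c))  # character embedding matrix
W = rng.normal(size=(d_star, w * d_c))       # d* filters, each over w chars
b = rng.normal(size=d_star)                  # one bias per filter

def char_cnn(word):
    """Return the d*-dimensional character-based representation of `word`."""
    C = np.stack([E_char[chars.index(c)] for c in word])  # |x| x d_c
    feats = []
    for i in range(len(word) - w + 1):
        window = C[i:i + w].reshape(-1)       # concat of c_i, ..., c_{i+w-1}
        feats.append(np.tanh(W @ window + b)) # f_i, computed for every filter
    return np.max(np.array(feats), axis=0)    # max-pool over positions i

rep = char_cnn("reading")
```

Each coordinate of `rep` is the max-pooled response of one filter, i.e., one learned character $n$-gram detector.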
Basically, given a sequence of words $(x_1, x_2, \\ldots, x_n)$, they run an $L$-layer forward LSTM and model the sequence probability as:\n\\begin{equation}\n    P(x_1, x_2, \\ldots, x_n) =  \\prod_{k = 1}^{n}P(x_k \\mid x_1, \\ldots, x_{k - 1}).\n\\end{equation}\nOnly the top layer of the LSTM, $\\overrightarrow{\\mf{h}}^{(L)}_k$, is used to predict the next token $x_{k + 1}$. Similarly, another $L$-layer LSTM is run backward and $\\overleftarrow{\\mf{h}}^{(L)}_k$ is used to predict the token $x_{k - 1}$. The overall training objective is to maximize the log-likelihood from both directions:\n\\begin{equation}\n  \\small\n    \\sum_{k=1}^{n}\\left({\\log P (x_k \\mid x_1, \\ldots, x_{k-1}; {\\Theta}_x, \\overrightarrow{{\\Theta}}_{\\text{LSTM}}, {\\Theta}_s ) + \\log P (x_k \\mid x_{k+1}, \\ldots, x_{n}; {\\Theta}_x, \\overleftarrow{{\\Theta}}_{\\text{LSTM}}, {\\Theta}_s )}\\right),\n\\end{equation}\nwhere $\\Theta_x$ and $\\Theta_s$ are the word embedding and softmax parameters, which are shared between the two LSTMs. The final contextualized word embeddings are computed as a linear combination of all the biLSTM layers and the input word embeddings, scaled by a scalar $\\gamma$:\n\\begin{equation}\n    \\sys{ELMo}(x_k) = \\gamma \\left(s_0 \\mf{x}_k + \\sum_{j=1}^{L}{\\overrightarrow{s}_{j} \\overrightarrow{\\mf{h}}^{(j)}_k} + \\sum_{j=1}^{L}{\\overleftarrow{s}_{j} \\overleftarrow{\\mf{h}}^{(j)}_k} \\right).\n\\end{equation}\nAll the weights $\\gamma, s_0, \\overrightarrow{s}_{j}, \\overleftarrow{s}_{j}$ are task-specific and learned during the training process.\n\nThese contextualized word embeddings are usually used in conjunction with traditional word type embeddings and character embeddings. It turns out that this sort of contextualized word embedding, pre-trained on very large text corpora (e.g., the 1B Word Benchmark~\\cite{chelba2014one}), has been highly effective. 
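The layer-combination formula above can be sketched in a few lines. This is our own toy illustration (random stand-in states, tiny dimensions), just to make the shapes of the weighted sum concrete:

```python
import numpy as np

# Toy sketch (ours) of the ELMo combination: a task-specific weighted sum of
# the input word embedding and the forward/backward hidden states from each
# of the L biLSTM layers, scaled by gamma.
L, d = 2, 8
rng = np.random.default_rng(1)
x_k = rng.normal(size=d)         # input word embedding at position k
h_fwd = rng.normal(size=(L, d))  # forward LSTM states for layers 1..L
h_bwd = rng.normal(size=(L, d))  # backward LSTM states for layers 1..L

def elmo(x_k, h_fwd, h_bwd, gamma, s0, s_fwd, s_bwd):
    """gamma * (s0*x_k + sum_j s_fwd[j]*h_fwd[j] + sum_j s_bwd[j]*h_bwd[j])."""
    return gamma * (s0 * x_k + s_fwd @ h_fwd + s_bwd @ h_bwd)

v = elmo(x_k, h_fwd, h_bwd, gamma=1.0, s0=0.5,
         s_fwd=np.array([0.25, 0.25]), s_bwd=np.array([0.25, 0.25]))
```

In the real model, `gamma`, `s0`, `s_fwd`, and `s_bwd` are learned per downstream task while the LSTM states come from the frozen pre-trained language model.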
\\newcite{peters2018deep} demonstrated that adding ELMo embeddings ($L = 2$ biLSTM layers with $4096$ units and $512$-dimensional projections) to an existing competitive model can bring the F1 score on \\sys{SQuAD} from $81.1$ to $85.8$ directly, a $4.7$-point absolute improvement.\n\nEarlier than \\sys{ELMo}, \\newcite{mccann2017learned} proposed \\sys{CoVe}, which learned contextualized word embeddings in a neural machine translation framework; the resulting encoder can be used in a similar way, as an addition to the word embeddings. They also demonstrated a $4.3$-point absolute improvement on \\sys{SQuAD}.\n\nVery recently, \\newcite{radford2018improving} and \\newcite{devlin2018bert} found that these contextualized word embeddings can not only be used as features of word representations in a task-specific neural architecture (a reading comprehension model in our context), but that we can also fine-tune the deep language models directly, with minimal modifications, to perform downstream tasks. This is indeed a very striking result at the time of writing this thesis; we will discuss it more in Section~\\ref{sec:rep-vs-arch}, and there still remain many open questions to answer in the future. Additionally, \\newcite{devlin2018bert} proposed a clever way to train bidirectional language models: instead of always stacking LSTMs in one direction and predicting the next word,\\footnote{To be clear, although ELMo adopts a biLSTM, it essentially uses two unidirectional LSTMs to predict the next word in each direction.} they mask out some words at random at the input layer, stack bidirectional layers, and predict these masked words at the top layer. 
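The masking scheme can be sketched as follows. This is a simplified illustration of ours; the actual BERT recipe additionally replaces some chosen tokens with random words or leaves them unchanged, which we omit here:

```python
import random

# Simplified sketch (ours) of masked-word training data preparation: replace
# ~15% of input tokens with a [MASK] symbol and keep the originals as the
# prediction targets for the model's top layer.
def mask_tokens(tokens, mask_rate=0.15, seed=0):
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            targets[i] = tok  # the model must predict tok at position i
        else:
            masked.append(tok)
    return masked, targets
```

Because the targets are predicted from *both* left and right context, the stacked layers can be genuinely bidirectional, unlike the two unidirectional LSTMs of ELMo.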
They find this training strategy extremely useful empirically.\n\n\\subsection{Attention Mechanisms}\n\\label{sec:attention-mechanisms}\n\nThere have been a multitude of attention variants proposed for neural reading comprehension models, and they aim to capture semantic similarity between the question and the passage, at different levels, at multiple granularities, or in a hierarchical way. A typical complex example in this direction can be found in \\cite{huang2018fusionnet}. To the best of our knowledge, there is no conclusion yet as to whether one single variant stands out. Our \\sys{Stanford Attentive Reader} (Section~\\ref{sec:sar}) takes the simplest possible form of attention (Figure~\\ref{fig:att-overview} illustrates an overview of different layers of attention). Besides that, we think there are two ideas which can generally further improve the performance of these systems:\n\n\\begin{figure}[t]\n\\centering\n\\vspace{1em}\n\\includegraphics[scale=0.25]{img/gen_fusion.pdf}\n\\vspace{1em}\n\n\\begin{tabular}{l|ccccc}\n\\hline\n\\bf Architectures & \\bf (1) & \\bf (2) & \\bf (2') & \\bf (3) & \\bf (3') \\\\ \\hline\nMatch-LSTM \\citep{wang2017machine} & & \\checkmark & & & \\\\\nDCN \\citep{xiong2017dynamic} & & \\checkmark & & & \\checkmark \\\\\nBiDAF \\citep{seo2017bidirectional} & & \\checkmark & & & \\checkmark \\\\\nRaSoR \\citep{lee2016learning} & \\checkmark & & \\checkmark & & \\\\\nR-net \\citep{wang2017gated} & & \\checkmark & & \\checkmark & \\\\\n\\hline\nOur model & \\checkmark & & & &  \\\\\n\\hline\n\\end{tabular}\n\\longcaption{A summary of different layers of attention.}{\\label{fig:att-overview} A summary of different layers of attention. Image courtesy: \\cite{huang2018fusionnet} with minimal modifications.}\n\\end{figure}\n\n\\subsubsection*{Bidirectional attention}\n\n\\newcite{seo2017bidirectional} first introduced the idea of \\ti{bidirectional attention}. 
In addition to what we already have, the key difference is that they add \\ti{question-to-passage} attention, which signifies which passage words have the closest similarity to each of the question words. In practice, this can be implemented as follows: for each word in the question, we compute an attention map over all the passage words, similarly to Equations~\\ref{eq:aligned_question} and \\ref{eq:aligned_question_attention}, but in the opposite direction:\n\n\\begin{equation}\n    f_{q\\_align}(q_i) = \\sum_j{b_{i, j} \\mf{E}(p_j)}.\n\\end{equation}\nAfter this, we can simply feed $f_{q\\_align}(q_i)$ into the input layer of the question encoding (Section~\\ref{sec:question-encoding}).\n\nThe attention mechanism in \\newcite{seo2017bidirectional} is a bit more complex, but we think it is similar in spirit. We also argue that the attention function in this direction is less useful, as also demonstrated in \\newcite{seo2017bidirectional}. This is because questions are generally short (10--20 words on average) and using a single LSTM for question encoding (without extra attention) is usually sufficient.\n\n\\subsubsection*{Self-attention over passage}\nThe second idea is \\ti{self-attention} over the passage words, first introduced in \\newcite{wang2017gated}.\\footnote{They named it the ``self-matching attention mechanism'' in the paper.} The intuition is that the passage words can be aligned to other passage words, with the hope that this can address coreference problems and aggregate information (about the same entity) from multiple places in the passage.\n\nIn detail, \\newcite{wang2017gated} first compute the hidden vectors for the passage: $\\mf{p}_1, \\mf{p}_2, \\ldots, \\mf{p}_{l_p}$ (Equation~\\ref{eq:passage-lstm}), and then for each $\\mf{p}_i$, they apply an attention function over $\\mf{p}_1, \\mf{p}_2, \\ldots, \\mf{p}_{l_p}$ via one hidden layer of MLP (Equation~\\ref{eq:mlp-att}):\n\\begin{eqnarray}\n    a_{i, j} & =&  
\frac{\exp\left(g_{\text{MLP}}(\mf{p}_i, \mf{p}_j)\right)}{\sum_{j'}\exp\left(g_{\text{MLP}}(\mf{p}_i, \mf{p}_{j'})\right)} \\\n    \mf{c}_i & = & \sum_{j}{a_{i, j}\mf{p}_j}\n\end{eqnarray}\nThen, $\mf{c}_i$ and $\mf{p}_i$ are concatenated and fed into another BiLSTM: $\mf{h}^{(p)}_i = \text{BiLSTM}(\mf{h}^{(p)}_{i-1}, [\mf{p}_i, \mf{c}_i])$, and can be used as the final passage representations.\n\n\subsection{Alternatives to LSTMs}\n\label{sec:alt-lstms}\nAll the models we discussed so far are based on recurrent neural networks (RNNs), in particular, LSTMs. It is well known that increasing the depth of neural networks can improve the capacity of models and bring gains in performance~\cite{he2016deep}. We also discussed earlier that deep BiLSTMs of $3$ or $4$ layers usually perform better than a single layer of BiLSTM (Section~\ref{sec:imp-details}). However, we face two challenges as we further increase the depth of the LSTM models: 1) It gets more difficult to optimize due to the vanishing gradient problem; 2) Scalability becomes an issue as the training/inference time increases linearly as the number of layers grows. It is known that LSTMs are difficult to parallelize and thus scale poorly due to their sequential nature.\n\nOn the one hand, some works attempt to add highway connections~\cite{srivastava2015training} or residual connections~\cite{he2016deep} between layers, which eases optimization and enables training deeper stacks of LSTMs. On the other hand, researchers have set out to find replacements for LSTMs that get rid of recurrent structures while still performing similarly or even better.\n\nThe most notable work in this line is the \sys{Transformer} model proposed by Google researchers~\cite{vaswani2017attention}. \sys{Transformer} builds only on word embeddings and simple positional encodings, with stacked self-attention layers and position-wise fully connected layers. 
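At the core of both the passage self-attention above and the \sys{Transformer}'s self-attention layers is the same computation: score every pair of positions, normalize the scores with a softmax, and take attention-weighted sums. Below is a minimal pure-Python sketch of this computation; note that the dot-product scorer here is only a stand-in for the one-hidden-layer MLP $g_{\text{MLP}}$ used in the equations above, so the numbers are purely illustrative.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def self_attend(P):
    """Self-attention over passage vectors P (a list of equal-length lists).

    For each position i: score every position j (a dot product here, standing
    in for g_MLP), softmax-normalize into weights a_{i,j}, and return the
    attention-weighted sums c_i = sum_j a_{i,j} * p_j."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    C = []
    for p_i in P:
        a = softmax([dot(p_i, p_j) for p_j in P])
        c_i = [sum(a[j] * P[j][d] for j in range(len(P)))
               for d in range(len(p_i))]
        C.append(c_i)
    return C
```

Each output vector $\mf{c}_i$ is a convex combination of the input vectors, since the attention weights are non-negative and sum to one.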
With residual connections, the \sys{Transformer} can be trained quickly even with many layers. It first demonstrated superior performance on a machine translation task with $L = 6$ layers (each layer consists of a self-attention sublayer and a fully connected feedforward network), and was later adapted by \newcite{yu2018qanet} for reading comprehension.\n\nThe resulting model, \sys{QANet} \cite{yu2018qanet}, uses a building block of multiple convolutional layers followed by a self-attention layer and a fully connected layer, for both question and passage encoding, with a few more such blocks stacked before the final prediction. The model demonstrated state-of-the-art performance at the time (Table~\ref{tab:squad-results}) while showing significant speed-ups.\n\nIn another line of work, \newcite{lei2018simple} proposed a lightweight recurrent unit called the \sys{Simple Recurrent Unit} (SRU), which simplifies the LSTM formulation while enabling CUDA-level optimizations for high parallelization. Their results suggest that simplified recurrence retains strong modeling capacity through layer stacking. They also demonstrate that replacing the LSTMs in our model with their \sys{SRU} unit can improve the F1 score by 2 points while being faster for training and inference.\n\n\subsection{Others}\n\n\paragraph{Training objectives.} It is also possible to make further progress by improving the training objectives. It is usually straightforward to employ a cross-entropy or max-margin loss for cloze style or multiple choice problems. However, for span prediction problems, \newcite{xiong2018dcn+} suggest that there is a discrepancy between the cross-entropy loss of predicting the two endpoints of the answer and the final evaluation metrics, which measure the word overlap between the predicted answer and the ground truth. 
Consider the following example:\n\n\begin{displayquote}\n\tf{passage}: Some believe that the Golden State Warriors team of 2017 is one of the greatest teams in NBA history \ldots \\\n\tf{question}: Which team is considered to be one of the greatest teams in NBA history? \\\n\tf{ground truth answer}: the Golden State Warriors team of 2017\n\end{displayquote}\nThe span ``Warriors'' is also a correct answer; however, from the perspective of cross-entropy-based training, it is no better than the span ``history''. \newcite{xiong2018dcn+} propose to use a mixed training objective which combines the cross-entropy loss over positions with the word overlap measure trained with reinforcement learning. Basically, they use $P^{(\text{start})}(i)$ and $P^{(\text{end})}(i)$ trained with the cross-entropy loss to sample the start and end positions of the answer, and then use the F1 score as the reward function.\n\nFor reading comprehension problems with free-form answers, there have been many recent advances in training better \sys{seq2seq} models, especially in the context of neural machine translation, such as sentence-level training~\cite{ranzato2016sequence} and minimum risk training~\cite{shen2016minimum}. However, we don't see many such applications in reading comprehension problems yet.\n\n\paragraph{Data augmentation.} Data augmentation has been a very successful approach for image recognition, while it remains less explored for NLP problems. \newcite{yu2018qanet} proposed a simple technique, called \ti{backtranslation}, for creating more training data for reading comprehension models: they leverage two state-of-the-art neural machine translation models, one from English to French and the other from French to English, and paraphrase each sentence in the passage by running it through the two models in sequence (with some modifications to the answer if needed). They obtained a gain of about 2 points in F1 by doing this on \sys{SQuAD}. 
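The backtranslation recipe just described can be sketched as follows. The two translation functions are hypothetical placeholders standing in for the real English-to-French and French-to-English NMT models; a real implementation would also remap the answer span into the paraphrased sentence rather than simply keeping the original.

```python
def en_to_fr(sentence):
    # hypothetical placeholder for an English-to-French NMT model
    return "<fr> " + sentence

def fr_to_en(sentence):
    # hypothetical placeholder for a French-to-English NMT model
    return sentence.replace("<fr> ", "")

def backtranslate_passage(sentences, answer):
    """Paraphrase each passage sentence by a round trip through the two
    translation models, falling back to the original sentence whenever
    the paraphrase would lose the answer string."""
    augmented = []
    for s in sentences:
        paraphrase = fr_to_en(en_to_fr(s))
        if answer in s and answer not in paraphrase:
            augmented.append(s)  # don't destroy the answer span
        else:
            augmented.append(paraphrase)
    return augmented
```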
\newcite{devlin2018bert} also find that joint training on \sys{SQuAD} and \sys{TriviaQA}~\cite{joshi2017triviaqa} can modestly improve the performance on \sys{SQuAD}.\n\n\subsection{Summary}\nSo far, we have discussed recent advances in different aspects which, in sum, contribute to the latest progress on current reading comprehension benchmarks (especially \sys{SQuAD}). Which components are more important than the others? Do we need to combine all of them? Are these recent advances able to generalize to other reading comprehension tasks? How are they correlated with different capacities of language understanding? We think there isn't a clear answer to most of these questions yet and they still require a lot of investigation.\n\n\begin{table}[!t]\n    \centering\n    \begin{tabular}{p{6cm} | c l}\n    \hline\n      \tf{Components} & \tf{F1 improvement} & \tf{References} \\\n    \hline\n      \sys{Glove}$\Rightarrow$\sys{Fasttext} & 78.9 $\Rightarrow$ 79.8: $+0.9$ & \cite{mikolov2017advances} \\\n      Character embeddings & 75.4 $\Rightarrow$ 77.3: $+1.9$ & \cite{seo2017bidirectional} \\\n      {\small Contextualized embeddings: \sys{ELMo}} & 81.1 $\Rightarrow$ 85.8: $+4.7$ & \cite{peters2018deep} \\\n    \hline\n      Question to passage attention & 73.7 $\Rightarrow$ 77.3: $+3.6$ & \cite{seo2017bidirectional} \\\n      Self-attention over passage & 76.7 $\Rightarrow$ 79.5: $+2.8$ & \cite{wang2017gated} \\\n    \hline\n      3-layer LSTMs $\Rightarrow$ 6-layer SRUs & 78.8 $\Rightarrow$ 80.2: $+1.4$ & \cite{lei2018simple} \\\n    \hline\n      Mixed training objective & 82.1 $\Rightarrow$ 83.1: $+1.0$ & \cite{xiong2018dcn+} \\\n      Data augmentation & 82.7 $\Rightarrow$ 83.8: $+1.1$ & \cite{yu2018qanet} \\\n    \hline\n    \end{tabular}\n    \longcaption{A summary of recent advances on \sys{SQuAD}}{\label{tab:impr-squad} A summary of recent advances on \sys{SQuAD}. 
The numbers are taken from the corresponding papers, on the development set of \sys{SQuAD}.}\n\end{table}\n\nWe compiled the improvements of different components on \sys{SQuAD} in Table~\ref{tab:impr-squad}. We would like to caution readers that these numbers are not directly comparable, as they are built on different model architectures and different implementations. We hope that this table at least gives some idea of the importance of these components on the \sys{SQuAD} dataset. As can be seen, all of these components contribute more or less to the final performance. The most important innovation is probably the use of contextualized word embeddings (e.g., \sys{ELMo}), while the formulation of attention functions is also crucial. It will be important to investigate whether these advances can generalize to other reading comprehension tasks in the future.\n"
  },
  {
    "path": "chapters/rc_models/experiments.tex",
"content": "%!TEX root = ../../thesis.tex\n\n\section{Experiments}\n\label{sec:sar-experiments}\n\n\subsection{Datasets}\n\nWe evaluate our model on \sys{CNN/Daily Mail}~\cite{hermann2015teaching} and \sys{SQuAD}~\cite{rajpurkar2016squad}, the two most popular and competitive reading comprehension datasets. We have described them before in Section~\ref{sec:deep-learning-era} regarding their importance in the development of neural reading comprehension and the way the datasets were constructed. Now we give a brief review of these datasets and their statistics.\n\n\begin{itemize}\n\item\nThe \sys{CNN/Daily Mail} datasets were constructed from articles on the news websites CNN and Daily Mail, utilizing the articles and their bullet point summaries. Each bullet point is converted to a question by replacing one entity with a placeholder, and the answer is that entity. The text has been run through a Google NLP pipeline. It is tokenized and lowercased, and named entity recognition and coreference resolution have been run. For each coreference chain containing at least one named entity, all items in the chain are replaced by an @entity$n$ marker, for a distinct index $n$ (Table~\ref{tab:rc-examples} (a)). On average, the articles in both \sys{CNN} and \sys{Daily Mail} contain 26.2 different entities. The training, development, and testing examples were collected from the news articles at different times. Accuracy (the percentage of examples for which the correct entity is predicted) is used for evaluation.\n\n\item\nThe \sys{SQuAD} dataset was collected based on Wikipedia articles. A set of 536 high-quality Wikipedia articles was sampled, and crowdworkers created questions based on each individual paragraph (paragraphs shorter than 500 characters were discarded), requiring that the answer be highlighted in the paragraph (Table~\ref{tab:rc-examples} (c)). The training/development/testing splits were made randomly based on articles (80\% vs. 10\% vs. 10\%). 
To estimate human performance and also make evaluation more reliable, they collected a few additional answers for each question (each question in the development set has 3.3 answers on average). Exact match and macro-averaged F1 scores are used for evaluation, as we discussed in Section~\ref{sec:evaluation}. Note that \sys{SQuAD} 2.0~\cite{rajpurkar2018know} was proposed more recently; it adds 53,775 unanswerable questions to the original dataset and we will discuss it in Section~\ref{sec:future-datasets}. For most of this thesis, \sys{SQuAD} refers to \sys{SQuAD} 1.1 unless stated otherwise.\n\end{itemize}\n\n\n\begin{table}[t]\n  \centering\n  \begin{tabular}{l | r r | r }\n  \hline\n    & \multicolumn{2}{c|}{cloze style} & span prediction \\\n    & \tf{CNN} & \tf{Daily Mail} & \tf{SQuAD} \\\n  \hline\n  \#Train & 380,298 & 879,450 & 87,599 \\\n  \#Dev & 3,924 & 64,835 & 10,570 \\\n  \#Test & 3,198 & 53,182 & 9,533 \\\n  \hline\n  Passage: avg. tokens & 761.8 & 813.1 & 134.4 \\\n  Question: avg. tokens & 12.5 & 14.3 & 11.3 \\\n  Answer: avg. tokens & 1.0 & 1.0 & 3.1 \\\n  \hline\n  \end{tabular}\n  \longcaption{Data statistics of \sys{CNN/Daily Mail} and \sys{SQuAD}}{\label{tab:data-statistics}Data statistics of \sys{CNN/Daily Mail} and \sys{SQuAD}. The average numbers of tokens are computed based on the training set.}\n\end{table}\n\n\nTable~\ref{tab:data-statistics} gives more detailed statistics of the datasets. As shown, the \sys{CNN/Daily Mail} datasets are much larger than \sys{SQuAD} (almost one order of magnitude bigger) due to the way the datasets were constructed. The passages used in \sys{CNN/Daily Mail} are also much longer --- 761.8 and 813.1 tokens for \sys{CNN} and \sys{Daily Mail} respectively, while it is 134.4 tokens for \sys{SQuAD}. 
Finally, the answers in \sys{SQuAD} consist of only 3.1 tokens on average, which reflects the fact that most of the \sys{SQuAD} questions are factoid and a large portion of the answers are common nouns or named entities.\n\n\n\subsection{Implementation Details}\n\label{sec:imp-details}\n\nBesides different model architecture designs, implementation details also play a crucial role in the final performance of these neural reading comprehension systems. In the following we highlight a few important aspects that we haven't covered yet and finally give the model specifications that we used on the two datasets.\n\n\paragraph{Stacked BiLSTMs.} One simple idea is to increase the depth of bidirectional LSTMs for question and passage encoding. A stacked BiLSTM computes $\mf{h}_t = [\overrightarrow{\mf{h}}_t; \overleftarrow{\mf{h}}_t] \in \R^{2h}$, regards $\mf{h}_t$ as the input $\mf{x}_t$ of the next layer, and passes it into another BiLSTM, and so on. We generally find that stacking BiLSTMs works better than a one-layer BiLSTM and we used $3$ layers for the SQuAD experiment.\footnote{We only used a shallow one-layer BiLSTM for the CNN/Daily Mail experiments in 2016 though.}\n\n\paragraph{Dropout.} Dropout is an effective and widely used approach to regularization in neural networks. Simply put, dropout refers to masking out some units at random during the training process. For our model, dropout can be added to the word embeddings, input vectors and hidden vectors of every LSTM layer. Finally, the variational dropout approach \cite{gal2016theoretically} has been demonstrated to work better than standard dropout for regularizing RNNs. The idea is to apply the same dropout mask at each time step to the inputs, outputs and recurrent layers, i.e., the same units are dropped at each time step. 
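The contrast can be sketched in a few lines of pure Python (using inverted dropout, which scales kept units by $1/(1-p)$ at training time so that expected activations are unchanged): variational dropout samples one mask and reuses it at every time step, whereas standard dropout would resample the mask per step.

```python
import random

def dropout_mask(dim, p, rng):
    """One inverted-dropout mask: drop each unit with probability p and
    scale the kept units by 1/(1-p) so expected values are unchanged."""
    return [0.0 if rng.random() < p else 1.0 / (1.0 - p) for _ in range(dim)]

def variational_dropout(sequence, p, rng):
    """Apply the *same* mask at every time step, so the same units are
    dropped across the whole sequence. Standard dropout would instead
    call dropout_mask once per time step."""
    mask = dropout_mask(len(sequence[0]), p, rng)
    return [[m * x for m, x in zip(mask, step)] for step in sequence]
```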
We suggest that readers use this variant in practice.\footnote{We didn't include variational dropout in our published paper results but later found it useful.}\n\n\paragraph{Handling word embeddings.} One common way (and also our default choice) to handle word embeddings is to keep the most frequent $K$ (e.g., $K = 500,000$) word types in the training set, map all other words to an $\left<unk\right>$ token, and then use pre-trained word embeddings to initialize the $K$ words. Typically, when the training set is large enough, we fine-tune all the word embeddings; when the training set is relatively small (e.g., \sys{SQuAD}), we usually keep all the word embeddings fixed as static features. In \newcite{chen2017reading}, we find that it helps to fine-tune the most frequent question words because the representations of these key words such as \ti{what}, \ti{how}, \ti{which} could be crucial for reading comprehension systems. Some studies such as \cite{dhingra2017comparative} demonstrated that the choice of pre-trained embeddings and the way out-of-vocabulary words are handled have a large impact on the performance of reading comprehension tasks.\n\n\paragraph{Model specifications.}\nFor all the experiments that require linguistic annotations (lemma, part-of-speech tags, named entity tags, dependency parses), we use the Stanford CoreNLP toolkit~\cite{manning2014stanford} for preprocessing. For training all the neural models, we sort all the examples by the length of their passages, and randomly sample a mini-batch of size 32 for each update.\n\nFor the results on \sys{CNN/Daily Mail}, we use the lowercased, 100-dimensional pre-trained \sys{Glove} word embeddings~\cite{pennington2014glove} trained on Wikipedia and Gigaword for initialization. The attention and output parameters are initialized from a uniform distribution between $(-0.01, 0.01)$, and the LSTM weights are initialized from a Gaussian distribution $\mathcal{N}(0, 0.1)$. 
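The initialization scheme just described can be sketched as follows (pure Python; a real implementation would build the full set of parameter matrices, and we take $0.1$ as the standard deviation of the Gaussian here, one of the two common readings of $\mathcal{N}(0, 0.1)$):

```python
import random

def init_uniform(rows, cols, scale=0.01, rng=random):
    """Attention/output parameters: uniform in (-scale, scale)."""
    return [[rng.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]

def init_gaussian(rows, cols, std=0.1, rng=random):
    """LSTM weights: zero-mean Gaussian (std taken as 0.1 here)."""
    return [[rng.gauss(0.0, std) for _ in range(cols)]
            for _ in range(rows)]
```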
We use a 1-layer BiLSTM of hidden size $h = 128$ for \sys{CNN} and $h = 256$ for \sys{Daily Mail}. Optimization is carried out using vanilla stochastic gradient descent (SGD), with a fixed learning rate of $0.1$. We also apply dropout with probability $0.2$ to the embedding layer and gradient clipping when the norm of gradients exceeds $10$.\n\nFor the results on \sys{SQuAD}, we use 3-layer BiLSTMs with $h = 128$ hidden units for both paragraph and question encoding. We use \sys{Adamax} for optimization as described in \cite{kingma2014adam}. Dropout with probability $0.3$ is applied to word embeddings and all the hidden units of LSTMs. We used the $300$-dimensional \sys{Glove} word embeddings trained on 840B tokens of Web crawl data for initialization and only fine-tune the 1000 most frequent question words.\n\nOther implementation details can be found in the following two Github repositories:\n\begin{itemize}\n    \item\n        \href{https://github.com/danqi/rc-cnn-dailymail}{https://github.com/danqi/rc-cnn-dailymail} for our experiments in \newcite{chen2016thorough}.\n    \item\n        \href{https://github.com/facebookresearch/DrQA}{https://github.com/facebookresearch/DrQA} for our experiments in \newcite{chen2017reading}.\n\end{itemize}\n\nWe would also like to caution readers that our experimental results were published in two papers (2016 and 2017) and they differ in various places. A key difference is that our results on \sys{CNN/Daily Mail} didn't include the manual features $f_{token}(p_i)$, exact match features $f_{exact\_match}(p_i)$, or aligned question embeddings $f_{align}(p_i)$; there, $\tilde{\mf{p}}_i$ just takes the word embedding $\mf{E}(p_i)$. Another difference is that our earlier model didn't have the attention layer in question encoding but simply concatenated the last hidden vectors from the LSTMs in both directions. 
We believe that these additions would be useful on \sys{CNN/Daily Mail} and other cloze style tasks as well, but we didn't investigate this further.\n\n\n\subsection{Experimental Results}\n\n\subsubsection{Results on \sys{CNN/Daily Mail}}\n\n\begin{table}[t]\n\centering\n\begin{tabular}{l c c c c}\n\toprule\n\multirow{2}{*}{\tf{Model}} & \multicolumn{2}{c}{\sys{CNN}} &  \multicolumn{2}{c}{\sys{Daily Mail}} \\\n& \tf{Dev} & \tf{Test} & \tf{Dev} & \tf{Test} \\\n\midrule\nFrame-semantic model $^\dagger$ & 36.3 & 40.2 & 35.5 & 35.5 \\\nWord distance model $^\dagger$ & 50.5 & 50.9 & 56.4 & 55.5 \\\nDeep LSTM Reader $^\dagger$ & 55.0 & 57.0 & 63.3 & 62.2 \\\nAttentive Reader $^\dagger$ & 61.6 & 63.0 & 70.5 & 69.0 \\\nImpatient Reader $^\dagger$ & 61.8 & 63.8 & 69.0 & 68.0 \\\n\midrule\nMemNNs (window memory) $^\ddagger$ & 58.0 & 60.6 & N/A & N/A \\\nMemNNs (window memory + self-sup.) $^\ddagger$ & 63.4 & 66.8 & N/A & N/A\\\nMemNNs (ensemble) $^\ddagger$ & 66.2\rlap{$^*$} & 69.4\rlap{$^*$} & N/A & N/A \\\n\midrule\nOur feature-based classifier & 67.1 & 67.9 & 69.1 & 68.3 \\\n\midrule\nStanford Attentive Reader & 72.5 & 72.7 & 76.9 & 76.0 \\\nStanford Attentive Reader (ensemble) &  76.2\rlap{$^*$} & 76.5\rlap{$^*$} & 79.5\rlap{$^*$} & 78.7\rlap{$^*$} \\\n\bottomrule\n\end{tabular}\n\longcaption{Evaluation results on CNN/Daily Mail}{\label{tab:cnn-dm-results}Accuracy of all models on the \sys{CNN} and \sys{Daily Mail} datasets. Results marked $^\dagger$ are from \newcite{hermann2015teaching} and results marked $^\ddagger$ are from \newcite{hill2016goldilocks}. The numbers marked with $^*$ indicate that the results are from ensemble models.}\n\end{table}\n\n\nTable~\ref{tab:cnn-dm-results} presents the results that we reported in \newcite{chen2016thorough}. We run our neural models 5 times independently with different random seeds and report average performance across the runs. 
We also report ensemble results, which average the prediction probabilities of the 5 models. In addition, we present the results for the feature-based classifier we described in Section~\ref{sec:feature-models}.\n\n\paragraph{Baselines.} We were among the earliest groups to study this first large-scale reading comprehension dataset. At the time, \newcite{hermann2015teaching} and \newcite{hill2016goldilocks} proposed a few baselines, both symbolic approaches and neural models, for this task. The baselines include:\n\begin{itemize}\n    \item\n        A \sys{frame-semantic} model in \newcite{hermann2015teaching}, in which they run a state-of-the-art semantic parser, extract entity-predicate triples denoted as $(e_1, V, e_2)$ from both the question and the passage, and attempt to match the correct entity using a number of heuristic rules.\n    \item\n        A \sys{word distance} model in \newcite{hermann2015teaching}, in which they align the placeholder of the question with each possible entity, and compute a distance measure between the question and the passage around the aligned entity.\n    \item\n        Several LSTM-based neural models proposed in \newcite{hermann2015teaching}, named \sys{Deep LSTM Reader}, \sys{Attentive Reader} and \sys{Impatient Reader}. The \sys{Deep LSTM Reader} just processes the question and the passage as one sequence using a deep LSTM (without an attention mechanism), and makes a prediction at the end. The \sys{Attentive Reader} is similar in spirit to ours, as it computes an attention function between the question vector and all the passage vectors, while the \sys{Impatient Reader} computes an attention function for all the question words and recurrently accumulates information as the model reads each question word.\n    \item\n        The \sys{window-based memory networks} proposed by \newcite{hill2016goldilocks} are based on the memory network architecture \cite{weston2015memory}. 
We think this model is also similar to ours; the biggest difference is the way it encodes passages: they only use a 5-word context window when evaluating a candidate entity and they use a positional unigram approach to encode the contextual embeddings. If a window consists of $5$ words $x_1, x_2, \ldots, x_5$, then it is encoded as $\sum_{i=1}^{5}{\mf{E}_i(x_i)}$, resulting in $5$ separate embedding matrices to learn. They encode the $5$-word window surrounding the placeholder in a similar way and all other words in the question text are ignored. In addition, they simply use a dot product to compute the ``relevance'' between the question and a contextual embedding.\n\end{itemize}\n\nAs seen in Table~\ref{tab:cnn-dm-results}, our feature-based classifier obtains 67.9\% accuracy on the \sys{CNN} test set and 68.3\% accuracy on the \sys{Daily Mail} test set. It significantly outperforms any of the symbolic approaches reported in \newcite{hermann2015teaching}. We feel that their frame-semantic model is not suitable for these tasks due to the poor coverage of the parser and is not representative of what a straightforward NLP system can achieve. Indeed, the frame-semantic model is even markedly inferior to the word distance model. To our surprise, our feature-based classifier even outperforms all the neural network systems in \newcite{hermann2015teaching} and the best single-system result reported by \newcite{hill2016goldilocks}. Moreover, our single-model neural network surpasses the previous results by a large margin (over 5\%), pushing up the state-of-the-art accuracies to 72.7\% and 76.0\% respectively. 
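The ensembling used here is simple probability averaging over independently trained runs. A minimal sketch, assuming each model has already produced a probability distribution over the same set of answer candidates:

```python
def ensemble_predict(model_probs):
    """model_probs: one probability list per model, all over the same
    candidates. Average the distributions, then pick the argmax."""
    n_models = len(model_probs)
    n_cands = len(model_probs[0])
    avg = [sum(p[i] for p in model_probs) / n_models
           for i in range(n_cands)]
    return max(range(n_cands), key=lambda i: avg[i])
```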
The ensembles of 5 models consistently bring further 2-4\\% gains.\n\n\\subsubsection{Results on \\sys{SQuAD}}\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{p{8.5cm} c c c c}\n\\hline\n \\bf Method &  \\multicolumn{2}{c}{\\bf Dev} & \\multicolumn{2}{c}{\\bf Test} \\\\\n&  \\tf{EM} & \\tf{F1} & \\tf{EM} & \\tf{F1} \\\\\n\\hline\nLogistic regression \\cite{rajpurkar2016squad} & 40.0 & 51.0 & 40.4 & 51.0 \\\\\n\\hline\nMatch-LSTM~\\cite{wang2017machine} &  64.1 & 73.9 & 64.7 & 73.7 \\\\\nRaSoR~\\cite{lee2016learning} & 66.4 & 74.9 & 67.4 & 75.5 \\\\\nDCN~\\cite{xiong2017dynamic} & 65.4 & 75.6 & 66.2 & 75.9 \\\\\nBiDAF~\\cite{seo2017bidirectional}  & 67.7 & 77.3 & 68.0 & 77.3 \\\\\n\\hline\n\\tf{Our model}~\\cite{chen2017reading} & 69.5 & 78.8 & 70.0 &  79.0\\\\\n\\hline\nR-NET~\\cite{wang2017gated} & 71.1 & 79.5 & 71.3 & 79.7 \\\\\nBiDAF + self-attention~\\cite{peters2018deep} & N/A & N/A & 72.1 & 81.1 \\\\\nFusionNet~\\cite{huang2018fusionnet} & N/A & N/A & 76.0 & 83.9 \\\\\nQANet~\\cite{yu2018qanet} & 73.6 & 82.7 & N/A & N/A \\\\\nSAN~\\cite{liu2018stochastic} & 76.2 & 84.1 & 76.8 & 84.4 \\\\\n{\\small BiDAF + self-attention + ELMo}~\\cite{peters2018deep} & N/A & N/A & 78.6 & 85.8 \\\\\nBERT~\\cite{devlin2018bert} & 84.1 & 90.9 & N/A & N/A \\\\\n\\hline\nHuman performance \\cite{rajpurkar2016squad} & 80.3 & 90.5 & 82.3 & 91.2 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\longcaption{Evaluation results on SQuAD}{\\label{tab:squad-results} Evaluation results on the SQuAD dataset (single model only). The results below ``our model'' were released after we finished the paper in Feb 2017. We only list representative models and report the results from the published papers. For a fair comparison, we didn't include the results which use other training resources (e.g., TriviaQA) or data augmentation techniques, except pre-trained language models, but we will discuss them in Section~\\ref{sec:advances}. 
}\n\end{table}\n\nTable~\ref{tab:squad-results} presents our evaluation results on both the development and testing sets. SQuAD has been a very competitive benchmark since it was created and we only list a few representative models and the single-model performance. It is well known that ensemble models can further improve the performance by a few points. We also included results from the logistic regression baseline (i.e., feature-based classifiers) created by the original authors \cite{rajpurkar2016squad}.\n\n\nOur system can achieve 70.0\% exact match and 79.0\% F1 scores on the test set, which surpassed all the published results and matched the top performance on the SQuAD leaderboard\footnote{\href{https://stanford-qa.com}{https://stanford-qa.com}.} at the time we wrote the paper~\cite{chen2017reading}. Additionally, we think that our model is conceptually simpler than most of the existing systems. Compared to the logistic regression baseline, which achieves $\text{F1} = 51.0$, our model represents an absolute improvement of close to 30\%, a big win for neural models.\n\nSince then, \sys{SQuAD} has received tremendous attention and great progress has been made on this dataset, as seen in Table~\ref{tab:squad-results}. Recent advances include pre-trained language models for initialization, more fine-grained attention mechanisms, data augmentation techniques and even better training objectives. 
We will discuss them in Section~\ref{sec:advances}.\n\n\n\subsubsection{Ablation studies}\n\n\begin{table}[h]\n\t\begin{center}\n\t\begin{tabular}{l | l}\n    \hline\n    \bf Features & \bf F1\\\n    \hline\n    Full & 78.8 \\\n    \hline\n    No $f_{token}$ & 78.0 (-0.8)\\\n    No $f_{exact\_match}$ & 77.3 (-1.5)\\\n    No $f_{aligned}$ & 77.3 (-1.5)\\\n    No $f_{aligned}$ and $f_{exact\_match}$ & 59.4 (-19.4) \\\n    \hline\n    \end{tabular}\n    \end{center}\n    \longcaption{Feature ablation analysis on SQuAD}{\label{tab:feature-ablation}Feature ablation analysis of the paragraph representations of our model. Results are reported on the SQuAD development set.}\n\end{table}\n\nIn \newcite{chen2017reading}, we conducted an ablation analysis on the components of the passage representations. As shown in Table~\ref{tab:feature-ablation}, all the components contribute to the performance of our final system. We find that, without the aligned question embeddings (only word embeddings and a few manual features), our system is still able to achieve an F1 score over 77\%. The effectiveness of the exact match features $f_{exact\_match}$ also indicates that there is a lot of word overlap between the passage and the question on this dataset. More interestingly, if we remove both $f_{aligned}$ and $f_{exact\_match}$, the performance drops dramatically, so we conclude that both features play a similar but complementary role in the feature representation, akin to hard and soft alignments between question and passage words.\n\n% \subsubsection{Attention visualization}\n% \red{TODO}\n\n\n\subsection{Analysis: What Have the Models Learned?}\n\nIn \newcite{chen2016thorough}, we attempted to better understand what these models have actually learned, and what depth of language understanding is needed to solve these problems. 
We approach this by doing a careful hand-analysis of 100 randomly sampled examples from the development set of the \\sys{CNN} dataset.\n\nWe roughly classify them into the following categories (if an example satisfies more than one category, we classify it into the earliest one):\n\\begin{description}\n   \\item[\\tf{Exact match}] The nearest words around the placeholder are also found in the passage surrounding an entity marker; the answer is self-evident.\n   \\item[\\tf{Sentence-level paraphrasing}] The question text is entailed\\slash rephrased by \\ti{exactly} one sentence in the passage, so the answer can definitely be identified from that sentence.\n   \\item[\\tf{Partial clue}] In many cases, even though we cannot find a complete semantic match between the question text and some sentence, we are still able to infer the answer through partial clues, such as some word/concept overlap.\n   \\item[\\tf{Multiple sentences}] Multiple sentences must be processed to infer the correct answer.\n   \\item[\\tf{Coreference errors}] It is unavoidable that there are many coreference errors in the dataset. This category includes those examples with critical coreference errors for the answer entity or key entities appearing in the question. Basically we treat this category as ``not answerable''.\n   \\item[\\tf{Ambiguous or hard}] This category includes examples for which we think humans are not able to obtain the correct answer (confidently).\n\\end{description}\n\nTable~\\ref{tab:cnn-ex-breakdown} provides our estimate of the percentage for each category, and Figure~\\ref{fig:cnn-examples} presents one representative example from each category. We observe that \\ti{paraphrasing} accounts for 41\\% of the examples and 19\\% of the examples are in the \\ti{partial clue} category. 
Adding the simplest category, \ti{exact match}, we hypothesize that a large portion (73\% in this subset) of the examples can be answered by identifying the most relevant (single) sentence and inferring the answer based upon it. Additionally, only 2 examples require multiple sentences for inference. This is a lower rate than we expected, and it suggests that the dataset requires less reasoning than previously thought. To our surprise, ``coreference errors'' and ``ambiguous/hard'' cases account for 25\% of this sample set, based on our manual analysis, and this will certainly be a barrier for training models with an accuracy much above 75\% (although, of course, a model can sometimes make a lucky guess). In fact, our ensemble neural network model is already able to achieve 76.5\% on the development set, and we think that the prospect of further improvement on this dataset is small.\n\n\begin{figure}[p]\n\centering\n\begin{tabular}{l p{4.5cm} p{6.5cm}}\n\toprule\nCategory & Question & Passage \\\n\midrule\nExact Match & \ti{it 's clear @entity0 is leaning toward} {\tf{@placeholder}} ,  says an expert who monitors @entity0 & \ldots @entity116 , who follows @entity0 's operations and propaganda closely , recently told @entity3 , \ti{it 's clear @entity0 is leaning toward} \tf{@entity60}  in terms of doctrine , ideology and an emphasis on holding territory after operations . \ldots  \\\n\midrule\nParaphrasing & {\tf{@placeholder} says he understands why @entity0 wo n't play at his tournament} &  \ldots @entity0 called me personally to let me know that he would n't be playing here at @entity23 , \" \tf{@entity3} said on his @entity21 event 's website . \ldots \\\n\midrule\nPartial clue & a tv movie based on @entity2 's book \tf{@placeholder} casts a @entity76 actor as @entity5 & \ldots  to @entity12  @entity2 professed that his \tf{@entity11} is not a religious book . \ldots \\\n\midrule\nMultiple sent. 
&  he 's doing a his - and - her duet all by himself ,  @entity6 said of \\tf{@placeholder} &  \\ldots we got some groundbreaking performances , here too , tonight ,  @entity6 said . we got \\tf{@entity17} , who will be doing some musical performances . he 's doing a his - and - her duet all by himself .  \\ldots \\\\\n\\midrule\nCoref. Error & rapper \\tf{@placeholder} \" disgusted , \" cancels upcoming show for @entity280 & \\ldots with hip - hop star \\tf{@entity246} saying on @entity247 that he was canceling an upcoming show for the @entity249 . \\ldots  (but @entity249 = @entity280 = SAEs)\\\\\n\\midrule\nHard & pilot error and snow were reasons stated for \\tf{@placeholder} plane crash  & \\ldots a small aircraft carrying \\tf{@entity5} , @entity6 and @entity7 the @entity12  @entity3 crashed a few miles from @entity9 , near @entity10 , @entity11 . \\ldots \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{Some representative examples from each category}{\\label{fig:cnn-examples}Some representative examples from each category on the \\sys{CNN} dataset.}\n\\end{figure}\n\n\\begin{table}[!t]\n  \\centering\n    \\begin{tabular}{l  l  r}\n      \\toprule\n    \\tf{\\#} & \\tf{Category} &    \\\\\n    \\midrule\n    1 & Exact match & 13\\%   \\\\\n    2 & Paraphrasing & 41\\% \\\\\n    3 & Partial clue & 19\\%  \\\\\n    4 & Multiple sentences & 2\\%  \\\\\n    \\midrule\n    5 & Coreference errors & 8\\% \\\\\n    6 & Ambiguous / hard &  17\\% \\\\\n    \\bottomrule\n    \\end{tabular}\n    \\longcaption{An estimate of the breakdown of \\sys{CNN} examples}{\\label{tab:cnn-ex-breakdown}An estimate of the breakdown of the dataset into classes, based on the analysis of our sampled 100 examples from the \\sys{CNN} dataset.}\n\\end{table}\n\n\\begin{figure}[!t]\n    \\center\n    \\includegraphics[scale=0.6]{img/cnn_analysis.png}\n    \\longcaption{The per-category performance of our two systems}{\\label{fig:category-performance} The per-category performance of our two 
systems: the \\sys{Stanford Attentive Reader} and the feature-based classifier, on the sampled 100 examples of the \\sys{CNN} dataset.}\n\\end{figure}\n\n% \\begin{table}[h]\n%    \\centering\n%     \\begin{tabular}{@{} l  r @{\\hspace*{0.25em}} r r @{\\hspace*{0.25em}} r @{}}\n%       \\toprule\n%      {Category} &  \\multicolumn{2}{c}{{Classifier}} & \\multicolumn{2}{c}{{Neural net}} \\\\\n%     \\midrule\n%      Exact match & 13 & (100.0\\%) & 13 & (100.0\\%) \\\\\n%      Paraphrasing &  32 & (78.1\\%) & 39 & (95.1\\%) \\\\\n%      Partial clue & 14 & (73.7\\%) &  17 & (89.5\\%) \\\\\n%      Multiple sentences &  1 & (50.0\\%) & 1 & (50.0\\%) \\\\\n%     \\midrule\n%      Coreference errors &  4 & (50.0\\%) & 3 & (37.5\\%)\\\\\n%      Ambiguous / hard &  2 & (11.8\\%) & 1 & (5.9\\%)  \\\\\n%      \\midrule\n%      All & 66 & (66.0\\%) & 74 & (74.0\\%) \\\\\n%     \\bottomrule\n%     \\end{tabular}\n%     \\longcaption{The per-category performance of our two systems}{\\label{tab:category-performance} The per-category performance of our two systems: the \\sys{Stanford Attentive Reader} and the feature-based classifier, on the sampled 100 examples of the \\sys{CNN} dataset.}\n% \\end{table}\n\n\nLet us now take a closer look at the per-category performance of our neural network and feature-based classifier, based on the above categorization. As shown in Figure~\\ref{fig:category-performance}, we make the following observations: (i)~The exact-match cases are quite simple and both systems get 100\\% correct. (ii)~For the ambiguous\\slash hard and entity-linking-error cases, in line with our expectations, both systems perform poorly. (iii)~The two systems differ mainly in the paraphrasing cases, and in some of the ``partial clue'' cases. This clearly shows how neural networks are better able to learn semantic matches involving paraphrasing or lexical variation between two sentences. 
(iv)~We believe that the neural network model already achieves near-optimal performance on all the single-sentence and unambiguous cases.\n\nTo sum up, we find that neural networks are certainly more powerful than conventional feature-based models at recognizing lexical matches and paraphrases; it remains unclear whether they also win out on examples that require more complex textual reasoning, as the current datasets are still quite limited in that respect.\n"
  },
  {
    "path": "chapters/rc_models/feature_classifier.tex",
"content": "%!TEX root = ../../thesis.tex\n\n\\section{Previous Approaches: Feature-based Models}\n\\label{sec:feature-models}\n\n% \\red{TODO: What is the space of possible entities? How do you keep it from being too large?}\n\nWe first describe a strong feature-based model that we built in \\newcite{chen2016thorough} for cloze style problems, in particular, the \\sys{CNN/Daily Mail} dataset~\\cite{hermann2015teaching}. We will then discuss similar models built for multiple choice and span prediction problems.\n\nFor the cloze style problems, the task is formulated as predicting the correct entity $a \\in \\mathcal{E}$ that can fill in the blank of the question $q$ based on reading the passage $p$ (one example can be found in Table~\\ref{tab:rc-examples}), where $\\mathcal{E}$ denotes the candidate set of entities. Conventional linear, feature-based classifiers usually need to construct a feature vector $f_{{p}, {q}}(e) \\in \\R^d$ for each candidate entity $e \\in \\mathcal{E}$, and to learn a weight vector $\\mf{w} \\in \\R^d$ such that the correct answer $a$ is expected to rank higher than all other candidate entities:\n\\begin{equation}\n\\mf{w}^{\\intercal}f_{p, q}(a) > \\mf{w}^{\\intercal}f_{{p}, {q}}(e), \\forall e \\in \\mathcal{E} \\setminus \\{a\\}.\n\\end{equation}\n\nAfter all the feature vectors are constructed for each entity $e$, we can then apply any popular machine learning algorithm (e.g., logistic regression or SVM). In \\newcite{chen2016thorough}, we chose to use \\sys{LambdaMART}~\\cite{wu2010adapting}, as this is naturally a ranking problem and forests of boosted decision trees have been very successful lately.\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{l p{14cm}}\n\\toprule\n\\tf{\\#} & \\tf{Feature} \\\\\n\\midrule\n1 & Whether entity $e$ occurs in the passage. \\\\\n2 & Whether entity $e$ occurs in the question. \\\\\n3 & The \\tf{frequency} of entity $e$ in the passage. 
\\\\\n4 & The \\tf{first position} of occurrence of entity $e$ in the passage. \\\\\n5 & \\tf{Word distance}: we align the placeholder with each occurrence of entity $e$, and compute the average minimum distance of each non-stop question word from the entity in the passage. \\\\\n6 & \\tf{Sentence co-occurrence}: whether entity $e$ co-occurs with another entity or verb that appears in the question, in some sentence of the passage. \\\\\n7 & \\tf{$n$-gram exact match}: whether there is an exact match between the text surrounding the placeholder and the text surrounding entity $e$. We have features for all combinations of matching left and/or right one or two words. \\\\\n8 & \\tf{Dependency parse match}: we dependency parse both the question and all the sentences in the passage, and extract an indicator feature of whether $w \\xrightarrow{r} \\text{@placeholder}$ and $w \\xrightarrow{r} e$ are both found; similar features are constructed for $\\text{@placeholder} \\xrightarrow{r} w$ and $e \\xrightarrow{r} w$. \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{Features used in our entity-centric classifier}{\\label{tab:classifier-features}Features used in our entity-centric classifier in \\newcite{chen2016thorough}.}\n\\end{table}\n\nThe key remaining question is how to build useful feature vectors from the passage $p$, the question $q$ and each entity $e$. Table~\\ref{tab:classifier-features} lists 8 sets of features that we proposed for the \\sys{CNN/Daily Mail} task. As shown in the table, these features are carefully designed and characterize both the entity itself (e.g., frequency, position and whether it is a question/passage word) and how it is aligned with the passage/question (e.g., co-occurrence, distance, linear and syntactic matching). Some features (\\#6 and \\#8) also rely on linguistic tools such as dependency parsing and part-of-speech tagging (deciding whether a word is a verb or not).  
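To make the ranking setup concrete, here is a toy Python sketch: it builds a tiny feature vector for each candidate entity (a made-up subset of the features in Table~\\ref{tab:classifier-features}) and ranks candidates by $\\mf{w}^{\\intercal}f$. The tokens, features and weights below are all hypothetical, and this is a plain linear scorer rather than the \\sys{LambdaMART} model we actually trained:

```python
# Toy sketch of the entity-centric ranking setup: build a small feature
# vector f(p, q, e) per candidate entity and rank candidates by w^T f.
# The features (frequency, first position, question occurrence) are a
# made-up subset of the full feature set; the weights are invented.

def features(passage_tokens, question_tokens, entity):
    freq = passage_tokens.count(entity)
    first_pos = (passage_tokens.index(entity)
                 if entity in passage_tokens else len(passage_tokens))
    in_question = 1.0 if entity in question_tokens else 0.0
    # Normalize position so that earlier mentions score higher.
    return [freq, 1.0 - first_pos / len(passage_tokens), in_question]

def rank(passage_tokens, question_tokens, candidates, w):
    scores = {e: sum(wi * fi for wi, fi in
                     zip(w, features(passage_tokens, question_tokens, e)))
              for e in candidates}
    return max(scores, key=scores.get)

passage = "@entity1 met @entity2 in @entity3 and @entity1 spoke".split()
question = "@placeholder spoke at the meeting".split()
w = [0.5, 0.2, 0.3]  # in practice learned, e.g., by a ranking model
print(rank(passage, question, ["@entity1", "@entity2", "@entity3"], w))
# -> @entity1 (most frequent and earliest candidate)
```

In the real system the weights are learned from the training set rather than fixed by hand, but the scoring structure is the same.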
Generally speaking, for non-neural models, how to construct a useful set of features remains a challenge. Useful features need to be informative and well tailored to specific tasks, while not so sparse that they fail to generalize from the training set. We argued earlier in Sec~\\ref{sec:ml-approaches} that this is a common problem in most feature-based models. Also, using off-the-shelf linguistic tools makes the models more expensive to run, and their final performance depends on the accuracy of these annotations.\n\n\\newcite{rajpurkar2016squad} and \\newcite{joshi2017triviaqa} also attempted to build feature-based models for the \\sys{SQuAD} and \\sys{TriviaQA} datasets respectively. The models are similar in spirit to ours, except that for these span prediction tasks, they need to first decide on a set of possible answers.\nFor \\sys{SQuAD}, \\newcite{rajpurkar2016squad} consider all constituents in parses generated by Stanford CoreNLP~\\cite{manning2014stanford} as candidate answers, while for \\sys{TriviaQA}, \\newcite{joshi2017triviaqa} consider all $n$-grams ($1 \\leq n \\leq 5$) that occur in sentences containing at least one word in common with the question. They also tried to add more lexicalized features and labels from constituency parses. Other attempts have been made for multiple choice problems, such as \\cite{wang2015machine} for the \\sys{MCTest} dataset, which used a rich set of features including semantic frames, word embeddings and coreference resolution.\n\nWe will demonstrate the empirical results of these feature-based classifiers and compare them to the neural models in Section~\\ref{sec:sar-experiments}.\n"
  },
  {
    "path": "chapters/rc_models/intro.tex",
"content": "%!TEX root = ../../thesis.tex\n\n% \\section{Introduction}\n\nIn this chapter, we will cover the essence of neural network models, from the basic building blocks to more recent advances.\n\nBefore delving into the details of neural models, we give a brief introduction to non-neural, feature-based models for reading comprehension in Section~\\ref{sec:feature-models}. In particular, we describe a model that we built in \\newcite{chen2016thorough}. We hope this will give readers a better sense of how these two approaches differ fundamentally.\n\nIn Section~\\ref{sec:sar}, we present a neural approach to reading comprehension called \\sys{The Stanford Attentive Reader}, which we first proposed in \\newcite{chen2016thorough} for the cloze style reading comprehension tasks, and then later adapted to the span prediction problems \\cite{chen2017reading} for \\sys{SQuAD}. We first briefly review the basic building blocks of modern neural NLP models, and then describe how our model is built on top of them. We conclude by discussing its extensions to the other types of reading comprehension problems.\n\nNext we present the empirical results of our model on the \\sys{CNN/Daily Mail} and the \\sys{SQuAD} datasets, and provide more implementation details in Section~\\ref{sec:sar-experiments}. 
We further conduct careful error analyses to help us better understand: 1) which components are most important for final performance; 2) where the neural models excel compared to non-neural feature-based models empirically.\n\nFinally, we summarize recent advances in neural reading comprehension in Section~\\ref{sec:advances}.\n\n% This chapter is going to cover the following topics:\n% \\begin{itemize}\n%     \\item\n%        Talk about non-neural approaches and use my baseline in the ACL'16 paper as an example\n%     \\item\n%           Introduce SAR (and its variants on different RC tasks) -- I am hoping to give more intuitions (\\red{how?})\n%     \\item\n%        Probably need to give some background of neural NLP: word embeddings and recurrent neural networks etc\n%     \\item\n%        Talk about experiments on CNN/Daily Mail and SQuAD: the architectures are slightly different but it should be fine...\n%     \\item\n%        Analysis: 1) ablation studies of SQuAD from the ACL17 paper 2) comparison between the neural approach and non-neural approach on the CNN dataset\n%     \\item\n%        Further advances: 1) word representations 2) alternatives of RNNs 3) attention mechanisms 4) better training objectives\n% \\end{itemize}\n"
  },
  {
    "path": "chapters/rc_models/sar.tex",
"content": "%!TEX root = ../../thesis.tex\n\n\\section{A Neural Approach: The Stanford Attentive Reader}\n\\label{sec:sar}\n\n\\subsection{Preliminaries}\nIn the following, we outline a minimal set of elements and the key ideas which form the basis of modern neural NLP models. For more details, we refer readers to \\cite{cho2015natural,goldberg2017neural}.\n\n\\subsubsection*{Word embeddings}\nThe first key idea is to represent words as low-dimensional (e.g., 300-dimensional), real-valued vectors. Before the deep learning age, it was common to represent a word as an index into the vocabulary, which is a notational variant of using one-hot word vectors: each word is represented as a high-dimensional, sparse vector where only the entry corresponding to that word is 1 and all other entries are 0's:\n\\begin{eqnarray*}\n\\mf{v}_{\\text{car}} = [0, 0, \\ldots, 0, 0, 1, 0, \\ldots, 0]^{\\intercal} \\\\\n\\mf{v}_{\\text{vehicle}} = [0, 1, \\ldots, 0, 0, 0, 0, \\ldots, 0]^{\\intercal}\n\\end{eqnarray*}\n\nThe biggest problem with these sparse vectors is that they don't encode any semantic similarity between words, i.e., for any pair of different words $a, b$, $\\cos(\\mf{v}_a, \\mf{v}_b) = 0$. Low-dimensional word embeddings effectively alleviate this problem, and similar words can be encoded as similar vectors in the geometric space: $\\cos(\\mf{v}_{\\text{car}}, \\mf{v}_{\\text{vehicle}}) > \\cos(\\mf{v}_{\\text{car}}, \\mf{v}_{\\text{man}})$.\n\nThese word embeddings can be effectively learned from large unlabeled text corpora, based on the assumption that words occurring in similar contexts tend to have similar meanings (a.k.a. the \\ti{distributional hypothesis}). Indeed, learning word embeddings from text has a long history and was finally popularized by recent scalable algorithms and released sets of pretrained word embeddings such as \\sys{word2vec}~\\cite{mikolov2013distributed}, \\sys{glove}~\\cite{pennington2014glove} and \\sys{fasttext}~\\cite{bojanowski2017enriching}. 
They have become the mainstay of modern NLP systems.\n\n\\subsubsection*{Recurrent neural networks}\nThe second important idea is the use of recurrent neural networks (RNNs) to model sentences or paragraphs in NLP. \\ti{Recurrent neural networks} are a class of neural networks which are well suited to handling sequences of variable length. More concretely, they apply a parameterized function recursively on a sequence $\\mf{x}_1, \\ldots, \\mf{x}_n$:\n\\begin{equation}\n    \\mf{h}_t = f(\\mf{h}_{t-1}, \\mf{x}_t; \\Theta)\n\\end{equation}\nFor NLP applications, we represent a sentence or a paragraph as a sequence of words where each word is transformed into a vector (usually through pre-trained word embeddings): $\\mf{x} = \\mf{x}_1, \\mf{x}_2, \\ldots, \\mf{x}_n \\in \\R^d$, and $\\mf{h}_t \\in \\R^h$ can be used to model the contextual information of $\\mf{x}_{1:t}$.\n\nVanilla RNNs take the form of\n\\begin{equation}\n    \\mf{h}_t = \\tanh(\\mf{W}^{hh}\\mf{h}_{t-1} + \\mf{W}^{hx}\\mf{x}_t + \\mf{b}),\n\\end{equation}\nwhere $\\mf{W}^{hh} \\in \\R^{h \\times h}, \\mf{W}^{hx} \\in \\R^{h\\times d}$, $\\mf{b} \\in \\R^h$ are the parameters to be learned. To ease optimization, many variants of RNNs have been proposed. Among them, long short-term memory networks (LSTMs)~\\cite{hochreiter1997} and gated recurrent units (GRUs)~\\cite{cho2014learning} are the most commonly used. Arguably, LSTM is still the most competitive RNN variant for NLP applications today, and it is also our default choice for the neural models that we will describe. 
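To make this recurrence concrete, here is a minimal pure-Python sketch of the vanilla RNN update $\\mf{h}_t = \\tanh(\\mf{W}^{hh}\\mf{h}_{t-1} + \\mf{W}^{hx}\\mf{x}_t + \\mf{b})$. The dimensions and weights are made up for illustration; real systems use optimized tensor libraries and gated variants such as the LSTMs described next:

```python
import math

# Minimal sketch of the vanilla RNN recurrence
#   h_t = tanh(W_hh h_{t-1} + W_hx x_t + b)
# with tiny made-up dimensions (h = 2, d = 2).

def rnn_step(h_prev, x_t, W_hh, W_hx, b):
    h = []
    for i in range(len(b)):
        s = b[i]
        s += sum(W_hh[i][j] * h_prev[j] for j in range(len(h_prev)))
        s += sum(W_hx[i][j] * x_t[j] for j in range(len(x_t)))
        h.append(math.tanh(s))
    return h

# Invented weights; a trained model would learn these.
W_hh = [[0.1, 0.0], [0.0, 0.1]]
W_hx = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]

h = [0.0, 0.0]                                      # h_0
for x_t in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:    # a 3-step input sequence
    h = rnn_step(h, x_t, W_hh, W_hx, b)
print(h)  # final hidden state summarizing the sequence
```

Each output entry stays in $(-1, 1)$ because of the $\\tanh$ nonlinearity, and the same parameters are reused at every time step, which is what lets the network handle sequences of any length.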
Mathematically, LSTMs can be formulated as follows:\n\\begin{eqnarray}\n    \\mf{i}_t & = & \\sigma(\\mf{W}^{ih}\\mf{h}_{t-1} + \\mf{W}^{ix}\\mf{x}_t + \\mf{b}^{i}) \\\\\n    \\mf{f}_t & = & \\sigma(\\mf{W}^{fh}\\mf{h}_{t-1} + \\mf{W}^{fx}\\mf{x}_t + \\mf{b}^{f}) \\\\\n    \\mf{o}_t & = & \\sigma(\\mf{W}^{oh}\\mf{h}_{t-1} + \\mf{W}^{ox}\\mf{x}_t + \\mf{b}^{o}) \\\\\n    \\mf{g}_t & = & \\tanh(\\mf{W}^{gh}\\mf{h}_{t-1} + \\mf{W}^{gx}\\mf{x}_t + \\mf{b}^{g}) \\\\\n    \\mf{c}_t & = & \\mf{f}_t \\odot \\mf{c}_{t-1} + \\mf{i}_t \\odot \\mf{g}_t \\\\\n    \\mf{h}_t & = & \\mf{o}_t \\odot \\tanh(\\mf{c}_t),\n\\end{eqnarray}\nwhere $\\mf{W}^{ih}, \\mf{W}^{fh}, \\mf{W}^{oh}, \\mf{W}^{gh} \\in \\R^{h \\times h}$, $\\mf{W}^{ix}, \\mf{W}^{fx}, \\mf{W}^{ox}, \\mf{W}^{gx} \\in \\R^{h \\times d}$ and $\\mf{b}^{i}, \\mf{b}^{f}, \\mf{b}^{o}, \\mf{b}^{g} \\in \\R^h$ are the parameters to be learned.\n\nFinally, a useful elaboration of an RNN is a \\ti{bidirectional RNN}. The idea is simple: for a sentence or a paragraph, $\\mf{x} = \\mf{x}_1, \\ldots, \\mf{x}_n$, a forward RNN is used from left to right and then another backward RNN is used from right to left:\n\\begin{eqnarray}\n    \\overrightarrow{\\mf{h}}_t & = & f(\\overrightarrow{\\mf{h}}_{t-1}, \\mf{x}_t; \\overrightarrow{\\Theta}), \\quad t = 1, \\ldots, n\\\\\n    \\overleftarrow{\\mf{h}}_t & = & f(\\overleftarrow{\\mf{h}}_{t+1}, \\mf{x}_t; \\overleftarrow{\\Theta}), \\quad t = n, \\ldots, 1\n\\end{eqnarray}\nWe define $\\mf{h}_t = [\\overrightarrow{\\mf{h}}_t; \\overleftarrow{\\mf{h}}_t] \\in \\R^{2h}$ which takes the concatenation of the hidden vectors from the RNNs in both directions. These representations usefully encode information from both the left and the right context, and can serve as a general-purpose, trainable feature extractor for many NLP tasks.\n\n\\subsubsection*{Attention mechanism}\nThe third important component is an attention mechanism. 
It was first introduced in the \\textit{sequence-to-sequence} (seq2seq) models \\cite{sutskever2014sequence} for neural machine translation \\cite{bahdanau2015neural,luong2015effective} and was later extended to other NLP tasks.\n\nThe key idea is the following: if we want to predict the sentiment of a sentence, or translate a sentence from one language to another, we usually apply recurrent neural networks to encode a single sentence (or the source sentence for machine translation): $\\mf{h}_1, \\mf{h}_2, \\ldots, \\mf{h}_n$ and use the last time step $\\mf{h}_n$ to predict the sentiment label or the first word in the target language:\n\n\\begin{equation}\n  P(Y = y) = \\frac{\\exp(\\mf{W}_y\\mf{h}_n)}{\\sum_{y'}{\\exp\\left(\\mf{W}_{y'}\\mf{h}_n\\right)}}\n\\end{equation}\n\nThis requires the model to compress all the necessary information of a sentence into a fixed-length vector, which creates an information bottleneck that limits performance. An attention mechanism is designed to solve this problem: instead of squashing all the information into the last hidden vector, it looks at the hidden vectors at all time steps and chooses a subset of these vectors adaptively:\n\\begin{eqnarray}\n    \\alpha_i & = & \\frac{\\exp\\left(g(\\mf{h}_i, \\mf{w}; \\Theta_g)\\right)}{\\sum_{i'=1}^{n}\\exp\\left(g(\\mf{h}_{i'}, \\mf{w}; \\Theta_g)\\right)} \\label{eq:attention} \\\\\n    \\mf{c} & = & \\sum_{i=1}^{n}{\\alpha_i \\mf{h}_i} \\label{eq:context-vector}\n\\end{eqnarray}\n\nHere $\\mf{w}$ can be a task-specific vector learned during training, or taken as the current target hidden state in machine translation, and $g$ is a parametric function which can be chosen in various ways, such as a dot product, a bilinear product, or an MLP with one hidden layer:\n\\begin{eqnarray}\n    g_{\\text{dot}}(\\mf{h}_i, \\mf{w}) &=& {\\mf{h}_i}^{\\intercal}\\mf{w} \\\\\n    g_{\\text{bilinear}}(\\mf{h}_i, \\mf{w}) &=& {\\mf{h}_i}^\\intercal\\mf{W}\\mf{w} \\\\\n    
g_{\\text{MLP}}(\\mf{h}_i, \\mf{w}) &=& {\\mf{v}}^\\intercal\\tanh(\\mf{W}^h\\mf{h}_i + \\mf{W}^w\\mf{w}) \\label{eq:mlp-att}\n\\end{eqnarray}\n\nRoughly, an attention mechanism computes a similarity score for each $\\mf{h}_i$, and then a softmax function is applied to return a discrete probability distribution over all the time steps. Thus $\\alpha$ essentially captures which parts of the sentence are relevant, while $\\mf{c}$ aggregates over all the time steps with a weighted sum and can be used for the final prediction. We will not go into further detail; interested readers are referred to \\newcite{bahdanau2015neural,luong2015effective}.\n\nAttention mechanisms have proven widely effective in numerous applications and have become an integral part of neural NLP models. Recently, \\newcite{parikh2016decomposable} and \\newcite{vaswani2017attention} conjectured that attention mechanisms don't have to be used in conjunction with recurrent neural networks and can be built purely on top of word embeddings and feed-forward networks, while providing minimal sequence information. This class of models usually requires fewer parameters and is more parallelizable and scalable --- in particular, the \\sys{Transformer} model proposed in \\newcite{vaswani2017attention} has become a recent trend and we will discuss it more in Section~\\ref{sec:alt-lstms}.\n\n\\subsection{The Model}\nAt this point, we are equipped with all the building blocks. How can we build effective neural models out of them for reading comprehension? What are the key ingredients? Next we introduce our model: the \\sys{Stanford Attentive Reader}. Our model is inspired by the \\sys{Attentive Reader} described in \\newcite{hermann2015teaching} and other concurrent works, with the goal of making the model simple yet powerful. 
We first describe its full form for span prediction problems, which we introduced in \\newcite{chen2017reading}, and later discuss its other variants.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[height=8cm]{img/drqa_reader.pdf}\n\\end{center}\n\\longcaption{A full model of \\sys{Stanford Attentive Reader}}{\\label{fig:sar} A full model of \\sys{Stanford Attentive Reader}. Image courtesy: \\\\ \\href{https://web.stanford.edu/~jurafsky/slp3/23.pdf}{https://web.stanford.edu/~jurafsky/slp3/23.pdf}.}\n\\end{figure}\n\nLet's first recap the setting of span-based reading comprehension problems: given a single passage $p$ consisting of $l_p$ tokens $(p_1, p_2, \\ldots, p_{l_p})$ and a question $q$ consisting of $l_q$ tokens $(q_1, q_2, \\ldots, q_{l_q})$, the goal is to predict a span $(a_{\\text{start}}, a_{\\text{end}})$ where $1 \\leq a_{\\text{start}} \\leq a_{\\text{end}} \\leq l_p$, so that the corresponding string $p_{a_{\\text{start}}}, p_{a_{\\text{start}} + 1}, \\ldots, p_{a_{\\text{end}}}$ gives the answer to the question.\n\nThe full model is illustrated in Figure~\\ref{fig:sar}. At a high level, the model first builds a vector representation for the question and a vector representation for each token in the passage. It then computes a similarity function between the question and each passage word in context, and uses the question-passage similarity scores to decide the starting and ending positions of the answer span. The model builds on top of low-dimensional, pre-trained word embeddings for each word in the passage and question (optionally augmented with linguistic annotations). All the parameters for passage/question encoding and similarity functions are optimized jointly for the final answer prediction. 
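As a schematic illustration of this high-level picture, the following toy Python sketch scores each passage position against the question vector with a bilinear product $\\mf{p}_i^{\\intercal}\\mf{W}\\mf{q}$ and normalizes the scores with a softmax; all vectors and the matrix $\\mf{W}$ below are made up, and the real model learns them end to end:

```python
import math

# Schematic sketch of the prediction step: score each passage position i
# against the question vector q with a bilinear product p_i^T W q, then
# softmax over positions. The real model trains two such classifiers
# (start and end); here we show one, with invented vectors and W.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def position_probs(P, q, W):
    # P: passage token vectors; q: question vector; W: h x h matrix
    scores = []
    for p in P:
        Wq = [sum(W[i][j] * q[j] for j in range(len(q)))
              for i in range(len(q))]
        scores.append(sum(pi * wi for pi, wi in zip(p, Wq)))
    return softmax(scores)

P = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # three passage positions
q = [0.0, 1.0]
W = [[1.0, 0.0], [0.0, 1.0]]               # identity: plain dot product

probs = position_probs(P, q, W)
print(probs.index(max(probs)))  # position most similar to the question -> 1
```

With $\\mf{W}$ set to the identity this reduces to a dot product; learning $\\mf{W}$ lets the model reweight and mix dimensions when comparing passage and question representations.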
Let's go into further details of each component:\n\n\\subsubsection*{Question encoding}\n\\label{sec:question-encoding}\nThe question encoding is relatively simple: we first map each question word $q_i$ into its word embedding $\\mf{E}(q_i) \\in \\R^d$, then apply a bi-directional LSTM on top of them, and finally obtain:\n\\begin{equation}\n    \\mf{q}_{1}, \\mf{q}_2, \\ldots, \\mf{q}_{l_q} = \\text{BiLSTM}(\\mf{E}(q_1), \\mf{E}(q_2), \\ldots, \\mf{E}(q_{l_q}); \\Theta^{(q)}) \\in \\R^{h}\n\\end{equation}\n\nWe then aggregate these hidden units into one single vector through an attention layer:\n\\begin{eqnarray}\n    b_j & = & \\frac{\\exp({\\mf{w}^{q}}^\\intercal \\mf{q}_j)}{\\sum_{j'}{\\exp({\\mf{w}^{q}}^\\intercal \\mf{q}_{j'})}} \\\\\n    \\mf{q} & = & \\sum_j{b_j \\mf{q}_j}\n\\end{eqnarray}\n$b_j$ measures the importance of each question word and $\\mf{w}^{q} \\in \\R^h$ is a weight vector to be learned. Therefore, $\\mf{q} \\in \\R^h$ is the final vector representation for the question. A simpler (and also common) alternative is to represent $\\mf{q}$ as the concatenation of the last hidden vectors from the LSTMs in both directions. 
However, based on the empirical performance, we find that adding this attention layer helps consistently, as it places more weight on the more relevant question words.\n\n\\subsubsection*{Passage encoding}\nPassage encoding is similar, as we also first form an input representation $\\tilde{\\mf{p}}_i \\in \\R^{\\tilde{d}}$ for each word in the passage and pass them through another bidirectional LSTM:\n\\begin{equation}\n  \\label{eq:passage-lstm}\n    \\mf{p}_{1}, \\mf{p}_2, \\ldots, \\mf{p}_{l_p} = \\text{BiLSTM}\\left(\\tilde{\\mf{p}}_1, \\tilde{\\mf{p}}_2, \\ldots, \\tilde{\\mf{p}}_{l_p}; \\Theta^{(p)}\\right) \\in \\R^{h}\n\\end{equation}\n\nThe input representation $\\tilde{\\mf{p}}_i$ can be divided into two categories: one encodes \\ti{the properties of each word itself}, and the other encodes \\ti{its relevance with respect to the question}.\n\nFor the first category, in addition to the word embedding $f_{emb}(p_i) = \\mf{E}(p_i) \\in \\R^d$, we also add some manual features which reflect the properties of word $p_i$ in its context, including its part-of-speech (POS) and named entity recognition (NER) tags and its (normalized) term frequency (TF): $f_{token}(p_i) = \\left(\\text{POS}(p_i), \\text{NER}(p_i), \\text{TF}(p_i)\\right)$. For POS and NER tags, we run off-the-shelf tools and convert them into one-hot representations, as the sets of tags are small. The TF feature is a real-valued number: the number of times the word appears in the passage, divided by the total number of words.\n\nFor the second category, we consider two types of representations:\n\\begin{itemize}\n  \\item\n  \\tf{Exact match}: $f_{exact\\_match}(p_i) = \\mathbb{I}(p_i \\in q) \\in \\R$. 
In practice, we use three simple binary features, indicating whether $p_i$ can be exactly matched to one question word in $q$, either in its original, lowercase or lemma form.\n  \\item\n  \\tf{Aligned question embeddings}: The exact match features encode the hard alignment between question words and passage words. Aligned question embeddings aim to encode a soft notion of alignment between words in the word embedding space, so that similar (but non-identical) words, e.g., \\textit{car} and \\textit{vehicle}, can be well aligned. Concretely, we use\n  \\begin{equation}\n      \\label{eq:aligned_question}\n    f_{align}(p_i) = \\sum_j{a_{i, j} \\mf{E}(q_j)}\n  \\end{equation}\n  where $a_{i, j}$ are the attention weights which capture the similarity between $p_i$ and each question word $q_j$, and $\\mf{E}(q_j) \\in \\R^d$ is the word embedding for each question word. $a_{i, j}$ is computed by the dot product between nonlinear mappings of word embeddings:\n  \\begin{equation}\n    \\label{eq:aligned_question_attention}\n    a_{i, j} = \\frac{\\exp\\left(\\text{MLP}(\\mf{E}(p_i))^{\\intercal} \\text{MLP}(\\mf{E}(q_{j}))\\right)}{\\sum_{j'}{\\exp\\left(\\text{MLP}(\\mf{E}(p_i)) ^{\\intercal} \\text{MLP}(\\mf{E}(q_{j'}))\\right)}},\n  \\end{equation} and $\\text{MLP}(\\mf{x}) = \\max(0, \\mf{W}_{\\text{MLP}}\\mf{x} + \\mf{b}_{\\text{MLP}})$ is a single dense layer with ReLU nonlinearity, where $\\mf{W}_{\\text{MLP}} \\in \\R^{d \\times d}$ and $\\mf{b}_{\\text{MLP}} \\in \\R^d$.\n\\end{itemize}\nFinally, we simply concatenate the four components to form the input representation:\n\\begin{equation}\n    \\tilde{\\mf{p}_i} = (f_{emb}(p_i), f_{token}(p_i), f_{exact\\_match}(p_i), f_{align}(p_i)) \\in \\R^{\\tilde{d}}\n\\end{equation}\n\n\\subsubsection*{Answer prediction}\nWe have vector representations for both the passage $\\mf{p}_1, \\mf{p}_2, \\ldots, \\mf{p}_{l_p} \\in \\R^h$ and the question $\\mf{q} \\in \\R^h$ and the goal is to predict the span that is most 
likely the correct answer. We employ the idea of attention mechanism again and train two separate classifiers, one is to predict the start position of the span while the other is to predict the end position. More specifically, we use a bilinear product to capture the similarity between $\\mf{p}_i$ and $\\mf{q}$:\n\\begin{eqnarray}\nP^{(\\text{start})}(i) & = & \\frac{\\exp\\left(\\mf{p}_i \\mf{W}^{(\\text{start})} \\mf{q}\\right)}{\\sum_{i'}\\exp\\left(\\mf{p}_{i'} \\mf{W}^{(\\text{start})} \\mf{q}\\right)} \\\\\nP^{(\\text{end})}(i) & = & \\frac{\\exp\\left(\\mf{p}_i \\mf{W}^{(\\text{end})} \\mf{q}\\right)}{\\sum_{i'}\\exp\\left(\\mf{p}_{i'} \\mf{W}^{(\\text{end})} \\mf{q}\\right)},\n\\end{eqnarray}\nwhere $\\mf{W}^{(\\text{start})}, \\mf{W}^{(\\text{end})} \\in \\R^{h \\times h}$ are additional parameters to be learned. This is slightly different from the formulation of attention as we don't need to take the weighted sum of all the vector representations. Instead, we use the normalized weights to make direct predictions. We use bilinear products because we find them to work well empirically.\n\n\\subsubsection*{Training and inference}\nThe final training objective is to minimize the cross-entropy loss:\n\\begin{equation}\n    \\mathcal{L} = - \\sum \\log{P^{(\\text{start})}(a_{\\text{start}})} - \\sum \\log{P^{(\\text{end})}(a_{\\text{end}})},\n\\end{equation}\nand all the parameters $\\Theta = \\Theta^{(p)}, \\Theta^{(q)}, \\mf{w}^{(q)}, \\mf{W}_{\\text{MLP}}, \\mf{b}_{\\text{MLP}}, \\mf{W}^{(\\text{start})}, \\mf{W}^{(\\text{end})}$ are optimized jointly with stochastic gradient methods.\\footnote{We exclude word embeddings here but it is also common to treat all or a subset of the word embeddings as parameters and fine-tune them during training.}\n\nDuring inference, we choose the span $p_i, \\ldots, p_{i'}$ such that $i \\leq i' \\leq i + max\\_len$ and $P^{(\\text{start})}(i) \\times P^{(\\text{end})}(i')$ is maximized. 
$max\\_len$ is a pre-defined constant (e.g., 15) which controls the maximum length of the answer.\n\n\\subsection{Extensions}\nIn the following, we give a few variants of the \\sys{Stanford Attentive Reader} for other types of reading comprehension problems.\nAll these models follow the same process of passage encoding and question encoding as described above, hence we have $\\mf{p}_1, \\mf{p}_2, \\ldots, \\mf{p}_{l_p} \\in \\R^h$ and $\\mf{q} \\in \\R^h$. We only discuss the answer prediction component and training objectives.\n\n\\paragraph{\\tf{Cloze style.}} Similarly, we can compute an attention function using a bilinear product of the question over all the words in the passage, and then compute an output vector $\\mf{o}$ which takes a weighted sum of all the paragraph representations:\n\\begin{eqnarray}\n    \\alpha_i & = & \\frac{\\exp\\left(\\mf{p}_i \\mf{W} \\mf{q}\\right)}{\\sum_{i'}\\exp\\left(\\mf{p}_{i'} \\mf{W} \\mf{q}\\right)} \\\\\n    \\mf{o} & = & \\sum_{i}{\\alpha_i \\mf{p}_i}   \\label{eqn:output_vector}\n\\end{eqnarray}\nThe output vector $\\mf{o}$ can be used to predict the missing word or entity:\n\\begin{equation}\n    P(Y = e \\mid p, q) = \\frac{\\exp(\\mf{W}^{(a)}_e \\mf{o})}{\\sum_{e' \\in \\mathcal{E}}\\exp\\left(\\mf{W}^{(a)}_{e'} \\mf{o}\\right)},\n\\end{equation}\nwhere $\\mathcal{E}$ denotes the candidate set of entities or words. It is straightforward to adopt a negative log-likelihood objective for training and choose $e \\in \\mathcal{E}$ which maximizes $\\mf{W}^{(a)}_{e} \\mf{o}$ during prediction. This model has been studied in our earlier paper \\cite{chen2016thorough} for the \\sys{CNN/Daily Mail} dataset and \\cite{onishi2016did} for the \\sys{Who-Did-What} dataset.\n\n\\paragraph{\\tf{Multiple choice.}} In this setting, $k$ hypothesized answers are given $\\mathcal{A} = \\{a_1, \\ldots, a_k\\}$ and we can encode each of them into a vector $\\mf{a}_i$ by applying a third BiLSTM, similar to our question encoding step. 
We can then compute the output vector $\\mf{o}$ as in Equation~\\ref{eqn:output_vector} and compare it with each hypothesized answer vector $\\mf{a}_i$ through another similarity function using a bilinear product:\n\\begin{equation}\n    P(Y = i \\mid p, q) = \\frac{\\exp(\\mf{a}_i \\mf{W}^{(a)} \\mf{o})}{\\sum_{i'=1, \\ldots, k}\\exp\\left(\\mf{a}_{i'}\\mf{W}^{(a)} \\mf{o}\\right)}\n\\end{equation}\nThe cross-entropy loss is again used for training. This model has been studied in \\newcite{lai2017race} for the \\sys{RACE} dataset.\n\n\\paragraph{\\tf{Free-form answer.}} For this type of problem, the answer is not restricted to a single entity or a span in the passage; it can be any sequence of words. The most common solution is to incorporate an LSTM sequence decoder into the current framework. In more detail, assume the answer string is $a = (a_1, a_2, \\ldots, a_{l_a})$ and a special ``end-of-sequence'' token $\\left<eos\\right>$ is added to the end of each answer. We can compute the output vector $\\mf{o}$ again as in Equation~\\ref{eqn:output_vector}. The decoder then generates one word at a time, and hence the conditional probability can be decomposed as:\n\\begin{equation}\n    P(a \\mid p, q) =  P(a \\mid \\mf{o}) = \\prod_{j = 1}^{l_a}P(a_j \\mid a_{<j}, \\mf{o})\n\\end{equation}\n\n$P(a_j \\mid a_{<j}, \\mf{o})$ is parameterized as an LSTM which takes $\\mf{o}$ as the initial hidden vector, and $a_j$ is predicted based on the hidden vector $\\mf{h}_j$ over the full vocabulary $\\mathcal{V} \\cup \\{\\left<eos\\right>\\}$. The training objective is\n\\begin{equation}\n\\mathcal{L} = -\\log{P(a \\mid p, q)} = -\\sum_{j = 1}^{l_a}\\log{P(a_j \\mid a_{<j}, \\mf{o})}\n\\end{equation}\nFor prediction, at each time step the word which maximizes $P(a_j \\mid a_{<j}, \\mf{o})$ is generated and then fed into the next time step, until the token $\\left<eos\\right>$ is predicted. 
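The prediction procedure just described, generating the most likely word at each step and feeding it into the next time step until the end-of-sequence token appears, can be sketched as follows. This is a minimal sketch: the toy tanh recurrence with random weights stands in for a trained LSTM decoder, and all dimensions and parameter names are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
h, vocab = 4, 5                    # hidden size and |V union {<eos>}| (toy values)
EOS = vocab - 1                    # reserve the last index for the <eos> token

# Toy parameters standing in for a trained decoder: a recurrent map of the
# hidden state, an input embedding table, and an output projection.
W_h = rng.standard_normal((h, h)) * 0.5
W_emb = rng.standard_normal((vocab, h)) * 0.5
W_out = rng.standard_normal((vocab, h))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode(o, max_steps=10):
    """Greedy decoding: start from the output vector o, emit the most
    likely word at each step, and stop once <eos> is generated."""
    hidden, answer = o, []
    prev = EOS  # conventional start symbol in this toy setup
    for _ in range(max_steps):
        hidden = np.tanh(W_h @ hidden + W_emb[prev])  # stand-in for the LSTM cell
        probs = softmax(W_out @ hidden)               # P(a_j | a_<j, o)
        prev = int(np.argmax(probs))
        if prev == EOS:
            break
        answer.append(prev)
    return answer

o = rng.standard_normal(h)  # stands in for the attention output vector o
ans = decode(o)
```

With a real LSTM cell and trained parameters in place of the random matrices, the same loop implements the greedy prediction rule described above; beam search is a common drop-in refinement.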
We will not elaborate further on these components, as they are standard in sequence-to-sequence models \\cite{sutskever2014sequence}.\n\nThis class of models has been studied on the \\sys{MS MARCO}~\\cite{nguyen2016ms} and the \\sys{NarrativeQA}~\\cite{kovcisky2018narrativeqa} datasets. However, as free-form answer reading comprehension problems are more complex and more difficult to evaluate, we think that these methods haven't been fully explored yet, compared to other types of problems. Lastly, we believe that a \\textit{copy mechanism} proposed for summarization tasks \\cite{gu2016incorporating,see2017get}, which allows the decoder to choose either to copy a word from the source text or to generate a word from the vocabulary, would be highly useful for reading comprehension tasks as well, as answer words are still likely to appear in the paragraph or question. We will discuss one model with a copy mechanism in Section~\\ref{sec:coqa-models}.\n"
  },
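To make the span-prediction component of the \sys{Stanford Attentive Reader} concrete, here is a minimal numpy sketch of the bilinear start/end classifiers and the constrained inference step. This is a sketch under stated assumptions: the random matrices stand in for trained parameters and BiLSTM encodings, and the toy dimensions (hidden size 4, passage length 6, max_len 2) are illustrative, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
h, l_p = 4, 6                      # hidden size and passage length (toy values)

# Toy encodings standing in for the BiLSTM outputs: one vector per
# passage token (rows of P) and a single question vector q.
P = rng.standard_normal((l_p, h))
q = rng.standard_normal(h)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Bilinear attention scores p_i W q, normalized into distributions over
# start and end positions (two separate parameter matrices).
W_start = rng.standard_normal((h, h))
W_end = rng.standard_normal((h, h))
p_start = softmax(P @ W_start @ q)
p_end = softmax(P @ W_end @ q)

# Inference: pick the span (i, i') with i <= i' <= i + max_len that
# maximizes P_start(i) * P_end(i').
max_len = 2
best, best_score = None, -1.0
for i in range(l_p):
    for j in range(i, min(i + max_len + 1, l_p)):
        score = p_start[i] * p_end[j]
        if score > best_score:
            best, best_score = (i, j), score
```

With trained encodings in place of the random draws, the same two loops implement exactly the inference rule described above; the brute-force search is fine in practice because max_len bounds the inner loop.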
  {
    "path": "chapters/rc_overview/discussions.tex",
"content": "%!TEX root = ../../thesis.tex\n\n\\section{Reading Comprehension vs. Question Answering}\n\\label{sec:rc-qa-diff}\n\nThere is a close relationship between reading comprehension and question answering. We can view reading comprehension as an instance of question answering because it is essentially a question answering problem over a short passage of text. Nevertheless, although reading comprehension and general question answering share many common characteristics in problem formulation, approaches and evaluation, we think they emphasize different things as their final goals:\n\n\\begin{itemize}\n    \\item\n        The ultimate goal of question answering is to build computer systems which are able to automatically \\ti{answer questions} posed by humans, no matter what sort of resources they depend on. These resources can be structured knowledge bases, unstructured text collections (encyclopedias, dictionaries, newswire articles and general Web documents), semi-structured tables or even other modalities. To improve the performance of QA systems, a lot of effort has been put into (1) how to search and identify relevant resources, (2) how to integrate answers from different pieces of information, and even (3) studying what types of questions humans usually ask in the real world.\n    \\item\n        Reading comprehension, in contrast, puts more emphasis on \\ti{text understanding}, with answering questions regarded as a way to measure language understanding. It therefore requires a deep understanding of the given passage in order to answer the question. Due to this key difference, early work in this field mostly focused on fictional stories~\\cite{lehnert1977process} (later extended to Wikipedia or Web documents), so all the information needed to answer comprehension questions comes from the passage itself rather than from world knowledge. The questions are also specifically devised to test different aspects of text comprehension. 
This distinction is akin to what questions people usually ask on search engines versus what sorts of questions are usually posed in human reading comprehension tests.\n\\end{itemize}\n\nSimilarly, early work \\cite{mitchell2009populating} used the terms \\tf{micro-reading} and \\tf{macro-reading} to differentiate these two scenarios. Micro-reading focuses on reading a single text document and aims to extract the full information content of that document (similar to our reading comprehension setting), while macro-reading takes a large text collection (e.g., the Web) as input and extracts a large collection of facts expressed in the text, without requiring that every single fact is extracted. Macro-reading can effectively leverage the \\ti{redundancy} of information across documents by focusing on analyzing simple wordings of the fact in the text, while micro-reading has to investigate a deeper level of language understanding.\n\nThis thesis mostly focuses on reading comprehension. In Chapter~\\ref{chapter:openqa}, we will come back to more general question answering problems, discuss related work and demonstrate that reading comprehension can also be helpful in building question answering systems.\n\n\\section{Datasets and Models}\n\\label{sec:rc-drive}\n\n\nAs seen in Section~\\ref{sec:deep-learning-era}, the recent success of reading comprehension has been mainly driven by two key components: \\ti{large-scale reading comprehension datasets} and \\ti{end-to-end neural reading comprehension models}. They work together to advance the field and push the boundaries of building better reading comprehension systems:\n\n\\begin{description}\n\\item\nOn the one hand, the creation of large-scale reading comprehension datasets has made it possible to train neural models, while demonstrating their competitiveness over symbolic NLP systems. 
The availability of these datasets further attracted a lot of attention in our research community and inspired a series of modeling innovations. Tremendous progress has been made thanks to all these efforts.\n\\item\nOn the other hand, understanding the performance of existing models further helps identify the limitations of existing datasets. This motivates us to seek better ways to construct more challenging datasets, towards the ultimate goal of machine comprehension of text.\n\\end{description}\n\n\n\\begin{figure}[!t]\n    \\center\n    \\includegraphics[scale=1.0]{img/timeline.pdf}\n    \\longcaption{The recent development of datasets and models in neural reading comprehension}{\\label{fig:timeline}The recent development of datasets (black) and models (blue) in neural reading comprehension. For the timeline, we use the date that the corresponding papers were published, except \\sys{BERT}~\\cite{devlin2018bert}.}\n\\end{figure}\n\nFigure~\\ref{fig:timeline} shows a timeline of the recent development of key datasets and models since 2016. As can be seen, although only three years have passed, the field has been moving strikingly fast. The innovations in building better datasets and more effective models have occurred alternately, and both have contributed to the development of the field. In the future, we believe it will be equally important to continue to develop both components.\n\nIn the next chapter, we will mainly focus on the modeling aspect, using the two representative datasets that we described earlier: \\sys{CNN/Daily Mail} and \\sys{SQuAD}. In Chapter~\\ref{chapter:rc-future}, we will discuss the advances and future work in more detail, for both datasets and models.\n"
  },
  {
    "path": "chapters/rc_overview/history.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{History}\n\\label{sec:rc-history}\n\n\\subsection{Early Systems}\nThe history of building automated reading comprehension systems dates back to over forty years ago. In the 1970s, researchers already recognized the importance of reading comprehension as an appropriate way of testing the language understanding abilities of computer programs.\n\n% \\red{TODO: want to cite \\cite{charniak1972toward} but I don't know much of its context.}\n\nOne of the most notable early works is the \\sys{QUALM} system detailed in \\newcite{lehnert1977process}. Built on top of the framework of scripts and plans as devices for modeling human story comprehension \\cite{schank1977scripts}, \\newcite{lehnert1977process} devised a theory of question answering and focused on pragmatic issues and the importance of the context of the story in responding to questions. This early work set a strong vision for language understanding, but the actual systems built at that time were very small and limited to hand-coded scripts, and difficult to generalize to broader domains.\n\nDue to the complexity of the problem, this line of research was mostly neglected in the 1980s and 1990s.\\footnote{There has been a large body of work in story comprehension developed within the psychology community, see \\cite{kintsch1998comprehension}.} In the late 1990s, there was some small revival of interest, following the creation of a reading comprehension dataset by \\newcite{hirschman1999deep} and a subsequent Workshop on Reading Comprehension Tests as Evaluation for Computer-based Understanding Systems at ANLP/NAACL 2000. The dataset consists of 60 stories for development and 60 stories for testing of 3rd to 6th grade material, followed by short-answer \\ti{who}, \\ti{what}, \\ti{when}, \\ti{where} and \\ti{why} questions. It only requires systems to return a sentence which contains the right answer. 
The systems developed at this stage were mostly rule-based bag-of-words approaches with shallow linguistic processing such as stemming, semantic class identification and pronoun resolution in the \\sys{Deep Read} system \\cite{hirschman1999deep}, or manually generated rules based on lexical and semantic correspondence in the \\sys{Quarc} system \\cite{riloff2000rule} or their combinations \\cite{charniak2000reading}. These systems achieved $30\\%$--$40\\%$ accuracy on retrieving the correct sentence.\n\n% {\\red{TODO: add a sentence about TREC QA?}}\n\n\\subsection{Machine Learning Approaches}\n\\label{sec:ml-approaches}\n\n\\afterpage{\n\\LTcapwidth=\\textwidth\n\\begin{longtable}{l | p{13.5cm}}\n\\toprule\n\\text{(a)} & \\tf{CNN/Daily Mail} (cloze style) \\\\\n& \\tf{passage}: {\\small ( @entity4 ) if you feel a ripple in the force today , it may be the news that the official @entity6 is getting its first gay character . according to the sci-fi website @entity9 , the upcoming novel `` @entity11 '' will feature a capable but flawed @entity13 official named @entity14 who `` also happens to be a lesbian . '' the character is the first gay figure in the official @entity6 -- the movies , television shows , comics and books approved by @entity6 franchise owner @entity22 -- according to @entity24 , editor of `` @entity6 '' books at @entity28 imprint @entity26 .} \\\\\n& \\tf{question}: {\\small characters in `` \\underline{\\hspace{1cm}} '' movies have gradually become more diverse} \\\\\n& \\tf{answer}: {\\small @entity6} \\\\\n\\midrule\n\\text{(b)} & \\tf{MCTest} (multiple choice) \\\\\n& \\tf{passage}: {\\small Once upon a time, there was a cowgirl named Clementine. Orange was her favorite color. Her favorite food was the strawberry. She really liked her Blackberry phone, which allowed her to call her friends and family when out on the range. One day Clementine thought she needed a new pair of boots, so she went to the mall. 
Before Clementine went inside the mall, she smoked a cigarette. Then she got a new pair of boots. She couldn't choose between brown and red. Finally she chose red, which the seller really liked. Once she got home, she found that her red boots didn't match her blue cowgirl clothes, so she knew she needed to return them. She traded them for a brown pair. While she was there, she also bought a pretzel from Auntie Anne's.} \\\\\n&\\tf{question}: {\\small What did the cowgirl do before buying new boots?} \\\\\n&\\tf{hypothesized answers}: {\\small A. She ate an orange B. She ate a strawberry C. She called her friend D. She smoked a cigarette} \\\\\n&\\tf{answer}: {\\small D. She smoked a cigarette} \\\\\n\\midrule\n\\text{(c)} &\\tf{SQuAD} (span prediction) \\\\\n&\\tf{passage}: {\\small Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion \\hl{Denver Broncos} defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50.}  \\\\\n&\\tf{question}: {\\small Which NFL team won Super Bowl 50?} \\\\\n&\\tf{answer}: {\\small Denver Broncos} \\\\\n\\midrule\n\\text{(d)} &\\tf{NarrativeQA} (free-form text) \\\\\n&\\tf{passage}: {\\small \\ldots In the eyes of the city, they are now considered frauds. 
Five years later, Ray owns an occult bookstore and works as an unpopular children s entertainer with Winston; Egon has returned to Columbia University to conduct experiments into human emotion; and Peter hosts a pseudo-psychic television show. Peter's former girlfriend Dana Barrett has had a son, Oscar, with a violinist whom she married then divorced when he received an offer to join the London Symphony Orchestra.\\ldots }  \\\\\n&\\tf{question}: {\\small How is Oscar related to Dana?} \\\\\n&\\tf{answer}: {\\small He is her son} \\\\\n\\bottomrule\n\\longcaption{Examples from representative reading comprehension datasets}{\\label{tab:rc-examples} A few examples from representative reading comprehension datasets: (a) \\sys{CNN/Daily Mail}~\\cite{hermann2015teaching}, (b) \\sys{MCTest}~\\cite{richardson2013mctest}, (c) \\sys{SQuAD}~\\cite{rajpurkar2016squad} and (d) \\sys{NarrativeQA}~\\cite{kovcisky2018narrativeqa}.}\n\\end{longtable}\n}\n\nBetween 2013 and 2015, there were remarkable efforts of formulating reading comprehension as a \\ti{supervised learning} problem: researchers collected human-labeled training examples in the form of (passage, question, answer) triples, with the hope that we can train statistical models which learn to map a passage and question pair into their corresponding answer: $f: (\\text{passage}, \\text{question}) \\longrightarrow \\text{answer}.$\n\nTwo notable datasets during this period are \\sys{MCTest}~\\cite{richardson2013mctest} and \\sys{ProcessBank}~\\cite{berant2014modeling}. \\sys{MCTest} collects 660 fictional stories, with 4 multiple choice questions per story (each question comes with 4 hypothetical answers and one of them is correct) (Table~\\ref{tab:rc-examples} (b)). \\sys{ProcessBank} is designed to answer binary-choice questions in a paragraph describing a biological process and requires an understanding of the relations between entities and events in the process. 
The dataset comprises 585 questions spread over the 200 paragraphs.\n\nIn the original \\sys{MCTest} paper, \\newcite{richardson2013mctest} proposed several rule-based baselines without leveraging any training data. One is a heuristic sliding window approach, which measures the weighted word overlap/distance information between words in the question, the answer and the sliding window; the other is to run an off-the-shelf textual entailment system by converting each question-answer pair into a statement. This dataset later inspired a strand of machine learning models \\cite{sachan2015learning,narasimhan2015machine,wang2015machine}. These models were mostly built on top of a simple max-margin learning framework with a rich set of hand-engineered linguistic features, including syntactic dependencies, semantic frames, coreference resolution, discourse relations and word embeddings. The performance was improved modestly from 63\\% to around 70\\% on the \\sys{MC500} portion. On the \\sys{ProcessBank} dataset, \\newcite{berant2014modeling} proposed a statistical model which learns to predict the process structure first and then maps the question to formal queries that can be executed against the structure. Similarly, the model incorporates a large set of manual features,\\footnote{See \\href{https://nlp.stanford.edu/pubs/berant-srikumar-manning-emnlp14-supp.pdf}{https://nlp.stanford.edu/pubs/berant-srikumar-manning-emnlp14-supp.pdf}.} and eventually obtains 66.7\\% accuracy on the binary classification task.\n\nThese machine learning models have achieved modest progress compared to earlier rule-based heuristic methods. However, their improvements are still rather limited and their weaknesses are summarized as follows:\n\\begin{itemize}\n    \\item\n        These models relied heavily on existing linguistic tools such as dependency parsers and semantic role labeling (SRL) systems. 
However, these linguistic representation tasks are far from solved and off-the-shelf tools are often trained on a single domain (e.g., newswire articles) and suffer from generalization problems in practical use. Therefore, leveraging existing linguistic annotations as features sometimes adds noise to these feature-based machine learning models, and the situation gets worse for higher-level annotations (e.g., discourse relations vs. part-of-speech tagging).\n    \\item\n        Simulating human-level comprehension is an elusive challenge, and it is often difficult to construct effective features from current linguistic representations. For example, for the third question in Figure~\\ref{fig:mctest-example}: \\ti{How many friends does Alyssa have in this story?}, it is impossible to construct an effective feature when the evidence is spread over the passage.\n    \\item\n        Although it is inspiring that we can train models from human-labeled reading comprehension examples, these datasets are still too small to support expressive statistical models. For example, the English Penn Treebank dataset for training dependency parsers consists of 39,832 examples, while in \\sys{MCTest}, only 1,480 examples are used for training --- let alone reading comprehension which, as a comprehensive language understanding task, is more complex and requires different reasoning capabilities.\n\\end{itemize}\n\n\\subsection{A Resurgence: The Deep Learning Era}\n\\label{sec:deep-learning-era}\n\nA turning point for this field came in 2015. The DeepMind researchers \\newcite{hermann2015teaching} proposed a novel and cheap solution for creating large-scale supervised training data for learning reading comprehension models. They also proposed a neural network model --- an attention-based LSTM model named \\sys{The Attentive Reader} --- and demonstrated that it outperformed symbolic NLP approaches by a large margin. 
In their experiments, the \\sys{Attentive Reader} achieved 63.8\\% accuracy while symbolic NLP systems obtained 50.9\\% at most on the \\sys{CNN} dataset. The idea of the data creation is as follows: CNN and Daily Mail articles are accompanied by a number of bullet points, summarizing aspects of the information contained in the article. They take a news article as the passage and convert one of its bullet points into a cloze-style question by replacing one entity at a time with a placeholder, and the answer is this replaced entity. In order to ensure that systems approaching this task need to genuinely understand the passage, rather than using world knowledge or a language model to answer questions, they run entity recognition and coreference resolution systems and replace all the entity mentions in each coreference chain by an abstract entity marker, e.g., \\ti{@entity6} (see an example in Table~\\ref{tab:rc-examples} (a)). As a result, nearly 1 million data examples were collected at almost no cost.\n\nTaking a step further, our work \\cite{chen2016thorough} investigated this first-ever large reading comprehension dataset and demonstrated that a simple, carefully designed neural network model (Section~\\ref{sec:sar}) is able to push the performance to 72.4\\% on the \\sys{CNN} dataset, another 8.6\\% absolute improvement. More importantly, we showed that neural network models are better at recognizing lexical matches and paraphrases compared to conventional feature-based classifiers. However, although this semi-synthetic dataset provides a promising avenue for training effective statistical models, we concluded that the dataset appears to be noisy due to its method of data creation and coreference errors and is limited for driving further progress.\n\nTo address these limitations, \\newcite{rajpurkar2016squad} collected a new dataset named \\sys{the Stanford Question Answering Dataset (SQuAD)}. 
The dataset contains 107,785 question-answer pairs on 536 Wikipedia articles and the questions were posed by crowdworkers, and the answer to every question is a span of text from the corresponding reading passage (Table~\\ref{tab:rc-examples} (c)). \\sys{SQuAD} was the first large-scale reading comprehension dataset with natural questions. Thanks to its high quality and reliable automatic evaluation, this dataset has spurred tremendous interest in the NLP community and become the central benchmark in this field. It in turn inspired a wide array of new reading comprehension models \\cite{wang2017machine,seo2017bidirectional,chen2017reading,wang2017gated,yu2018qanet} and the progress has been rapid --- as of Oct 2018, the best-performing single system achieved an F1 score of 91.8\\% \\cite{devlin2018bert} which already exceeds the estimated human performance of 91.2\\%, while a feature-based classifier built by the original authors in 2016 only obtained an F1 of 51.0\\%, as shown in Figure~\\ref{fig:squad-progress}.\n\n\\begin{figure}[!t]\n\\center\n\\includegraphics[scale=0.8]{img/squad_progress.png}\n\\longcaption{The progress on \\sys{SQuAD} 1.1}{\\label{fig:squad-progress}The progress on \\sys{SQuAD} 1.1 (single model) since the dataset was released in June 2016. The data points are taken from the leaderboard at \\href{http://stanford-qa.com/}{http://stanford-qa.com/}.}\n\\end{figure}\n\nAll the current top-performing systems on \\sys{SQuAD} are built on \\ti{end-to-end neural networks}, or \\ti{deep learning} models. These models usually start off from the idea of representing every single word in the passage and question as a dense vector (e.g., 300 dimensions), passing through several modeling or interaction layers, and finally making predictions. All the parameters can be optimized jointly using the gradient descent algorithm or its variants. 
This class of models can be referred to as \\ti{neural reading comprehension} and we will describe it in detail in Chapter~\\ref{chapter:rc-models}. Differing from feature-based classifiers, neural reading comprehension models have several great advantages:\n\\begin{itemize}\n    \\item\n        They don't rely on any downstream linguistic features (e.g., dependency parsing or coreference resolution) and all the features are learned on their own in one unified end-to-end framework. This can avoid noise in linguistic annotations while also providing great flexibility in the space of useful features.\n    \\item\n        Conventional symbolic NLP systems suffer from one severe problem: features are usually very sparse and generalize poorly. For example, to answer a question ``\\ti{How many individual libraries \\tf{make up} the main school library?}'' from a passage ``\\ldots\\quad\\quad\\ti{Harvard Library, which is the world's largest academic and private library system, \\tf{comprising} 79 individual libraries with over 18 million volumes.}'', a system has to learn the correspondence between \\ti{comprising} and \\ti{make up} based on indicator features such as:\n        $$\\text{pw}_i = \\text{comprising} \\wedge \\text{qw}_{j} = \\text{make} \\wedge \\text{qw}_{j + 1} = \\text{up}.$$\n        There is insufficient data to correctly weight most such features. It is a common problem in all non-neural NLP models. Making use of low-dimensional, dense word embeddings can effectively alleviate sparsity by sharing statistical strength between similar words.\n    \\item\n        They are relieved from the labor of constructing a large set of manual features. Therefore, neural models are conceptually simpler and the focus can move to the design of neural architectures instead. 
Thanks to the development of modern deep learning frameworks such as \\sys{TensorFlow} and \\sys{PyTorch}, great progress has been made, and now it is fast and easy to develop new models.\n\\end{itemize}\n\n% \\red{TODO: add ``power of end-to-end optimization for a final goal''}\n% \\red{TODO: add ``more effective utilization of context in interpretation for WSD etc. This is what LSTMs give you!''}\n\nThere is no doubt that achieving human performance on \\sys{SQuAD} is incredible and arguably one of the biggest results we have seen in the NLP community in the past few years. Nevertheless, solving the \\sys{SQuAD} task isn't equivalent to solving machine reading comprehension. We need to acknowledge that SQuAD is restricted in that questions must be answered using a single span in the passage, and most SQuAD examples are fairly simple and don't really need complex reasoning.\n\nThe field has been further evolving. Following the theme of creating large-scale and more challenging reading comprehension datasets, a multitude of datasets have been collected recently: \\sys{TriviaQA} \\cite{joshi2017triviaqa}, \\sys{RACE} \\cite{lai2017race}, \\sys{QAngaroo} \\cite{welbl2018constructing}, \\sys{NarrativeQA} \\cite{kovcisky2018narrativeqa}, \\sys{MultiRC} \\cite{khashabi2018looking}, \\sys{SQuAD 2.0}~\\cite{rajpurkar2018know}, \\sys{HotpotQA}~\\cite{yang2018hotpotqa} and many others. These datasets were collected from a variety of sources (Wikipedia, newswire articles, fictional stories or other Web resources) and constructed in very different ways, and they aim to tackle many challenges that haven't been addressed before --- questions which are curated independently of the passages, questions which require multiple sentences or even multiple documents to answer, questions based on long documents like a full book, or questions which are not answerable from the passage. 
At the time of this writing, most of these datasets have not been solved yet and there remains a large gap between state-of-the-art methods and human performance levels. Reading comprehension has become one of the most active fields in NLP today and there are still many open questions to solve. We will discuss the recent development of reading comprehension datasets in more detail in Section~\\ref{sec:future-datasets}.\n"
  },
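The cloze-style data creation recipe described in the history section above, anonymizing coreferent entity mentions with abstract markers and then blanking out one entity at a time from a bullet point, can be sketched roughly as follows. This is a simplified sketch, not the exact pipeline of Hermann et al. (2015): we assume entity mentions and their coreference-chain ids have already been resolved upstream, and the marker and placeholder strings follow the `@entity`/`@placeholder` convention shown in Table (a).

```python
def make_cloze_examples(bullet_point, entities):
    """Turn a news bullet point into cloze questions.

    `entities` maps mention strings to coreference-chain ids (assumed to
    be produced by upstream entity recognition + coreference resolution).
    Returns a list of (question, answer) pairs.
    """
    # Anonymize every entity mention with an abstract marker such as
    # @entity6, so that world knowledge alone cannot answer the question.
    anonymized = bullet_point
    for mention, ent_id in entities.items():
        anonymized = anonymized.replace(mention, f"@entity{ent_id}")
    markers = sorted({f"@entity{i}" for i in entities.values()})
    # Blank out one entity at a time; the removed marker is the answer.
    return [(anonymized.replace(m, "@placeholder"), m) for m in markers]

examples = make_cloze_examples(
    "Star Wars is getting its first gay character, according to EW",
    {"Star Wars": 6, "EW": 9},
)
```

Applied to a full article's bullet points, this yields many (passage, question, answer) triples at almost no annotation cost, which is the key reason the CNN/Daily Mail dataset could reach nearly one million examples.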
  {
    "path": "chapters/rc_overview/intro.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\epigraph{When a person understands a story, he can demonstrate his understanding by answering questions about the story. Since questions can be devised to query any aspect of text comprehension, the ability to answer questions is the strongest possible demonstration of understanding. If a computer is said to understand a story, we must demand of the computer the same demonstration of understanding that we require of people. Until such demands are met, we have no way of evaluating text understanding programs.}{Wendy Lehnert, 1977}\n\n\nIn this chapter, we aim to provide readers with an overview of reading comprehension. We begin with the history of reading comprehension (Section~\\ref{sec:rc-history}), from the early systems developed in the 1970s, to the attempts to build machine learning models for this task, to the more recent resurgence of neural (deep learning) approaches. This field has been completely reshaped by neural reading comprehension, and the progress is very exciting.\n\nWe then formally define the reading comprehension task as a supervised learning problem in Section~\\ref{sec:task-definition} and describe four different categories based on the answer type. We end by discussing their evaluation metrics.\n\nNext we discuss briefly how reading comprehension differs from question answering, especially in their final goals (Section~\\ref{sec:rc-qa-diff}). 
Finally, we discuss how the interplay of large-scale datasets and neural models contributes to the development of modern reading comprehension in Section~\\ref{sec:rc-drive}.\n\n% This chapter is going to cover the following topics:\n% \\begin{itemize}\n% \\item\n%   Task definition: formalize 4 different types of RC tasks and their evaluation metrics.\n% \\item\n%   Disucss how RC and QA are different\n% \\item\n%   Recap the history of reading comprehension: early systems, machine learning models and the deep learning era.\n% \\item\n%   Finally,\n% \\end{itemize}\n"
  },
  {
    "path": "chapters/rc_overview/task.tex",
    "content": "%!TEX root = ../../thesis.tex\n\n\\section{Task Definition}\n\\label{sec:task-definition}\n\n\\subsection{Problem Formulation}\n\nThe task of reading comprehension can be formulated as a supervised learning problem: given a collection of training examples $\\{({p}_i, {q}_i, {a}_i)\\}_{i=1}^{n}$, the goal is to learn a predictor $f$ which takes a passage of text ${p}$ and a corresponding question ${q}$ as inputs and gives the answer ${a}$ as output.\n\\begin{equation}\n  f: ({p}, {q}) \\longrightarrow {a}\n\\end{equation}\n\nLet ${p} = (p_1, p_2, \\ldots, p_{l_p})$ and ${q} = (q_1, q_2, \\ldots, q_{l_q})$\\footnote{A preprocessing step of tokenization is usually required on most current reading comprehension datasets.} where $l_p$ and $l_q$ denote the length of the passage and the question, $p_i \\in \\mathcal{V}$ for $i = 1, \\ldots, l_p$ and $q_i \\in \\mathcal{V}$ for $i = 1, \\ldots, l_q$ where $\\mathcal{V}$ is a pre-defined vocabulary. Here we only consider the passage ${p}$ as a short paragraph represented as a sequence of $l_p$ words. It is straightforward to extend it to a multi-paragraph setting \\cite{clark2018simple} where ${p}$ is a set of paragraphs or decompose it into smaller linguistic units such as sentences.\\footnote{There have been some efforts (e.g., \\cite{xie2017constituent}) which model the paragraph as a sequence of sentences, but there is no clear evidence that it outperforms methods that treat the whole paragraph as a long sequence at this point.}\n\nDepending on the answer type, the answer ${a}$ can take very different forms. Generally, we can divide existing reading comprehension tasks into four categories:\n\n\\begin{description}\n\\item[Cloze style.] The question contains a placeholder. 
For instance,\n\\begin{displayquote}\nTottenham manager Juande Ramos has hinted he will allow \\underline{\\hspace{1cm}} to leave if the Bulgaria striker makes it clear he is unhappy.\n\\end{displayquote}\nIn these tasks, the systems must guess which word or entity completes the sentence (question), based on the passage, and the answer ${a}$ is either chosen from a pre-defined set of choices $\\mathcal{A}$ or from the full vocabulary $\\mathcal{V}$. For example, in the \\sys{Who-did-What} dataset \\cite{onishi2016did}, ${a}$ must be one of the person named entities in the passage and $|\\mathcal{A}| = 3.5$ on average.\n\n\\item[Multiple choice.] In this category, the correct answer is chosen from $k$ hypothesized answers (e.g., $k = 4$):\n$$\\mathcal{A} = \\{{a}_1, \\ldots, {a}_k\\}  \\text{ where } {a}_{j} = (a_{j, 1}, a_{j, 2}, \\ldots, a_{j, l_{a, j}}), \\, a_{j, i} \\in \\mathcal{V}.$$\nEach hypothesized answer ${a}_j$ can be a word, a phrase or a sentence. One of the hypothesized answers is correct and thus ${a}$ must be chosen from $\\{{a}_1, \\ldots, {a}_k\\}$.\n\n\n\\item[Span prediction.] This category is also referred to as \\ti{extractive question answering} and the answer ${a}$ must be a single span in the passage. Therefore, ${a}$ can be represented as $(a_{start}, a_{end})$ where $1 \\leq a_{start} \\leq a_{end} \\leq l_p$, and the answer string corresponds to $p_{a_{start}}, \\ldots, p_{a_{end}}.$\n\n\\item[Free-form answer.] 
The last category allows the answer to be free-form text (i.e., a word sequence of arbitrary length); formally, ${a} \\in \\mathcal{V}^*$.\n\\end{description}\n\nTable~\\ref{tab:rc-examples} gives an example in each of the categories from four representative datasets: \\sys{CNN/Daily Mail}~\\cite{hermann2015teaching} (cloze style), \\sys{MCTest}~\\cite{richardson2013mctest} (multiple choice), \\sys{SQuAD}~\\cite{rajpurkar2016squad} (span prediction) and \\sys{NarrativeQA}~\\cite{kovcisky2018narrativeqa} (free-form answer).\n\n\n\\subsection{Evaluation}\n\\label{sec:evaluation}\nHaving formally defined the four categories of reading comprehension tasks, we next discuss their evaluation metrics.\n\nFor \\tf{multiple choice} or \\tf{cloze style} tasks, it is straightforward to measure accuracy: the percentage of questions for which a system gives exactly the correct answer, since the answer is chosen from a small set of hypothesized answers.\n\nFor \\tf{span prediction} tasks, we need to compare the predicted answer string to the ground truth. Typically, we use the two evaluation metrics proposed by \\newcite{rajpurkar2016squad}, which measure exact match and partial credit respectively:\n\n\\begin{itemize}\n    \\item\n        \\tf{Exact match (EM)} assigns full credit $1.0$ if the predicted answer is equal to the gold answer and $0.0$ otherwise.\n    \\item\n        \\tf{F1 score} computes the average word overlap between predicted and gold answers. The prediction and the gold answer are treated as bags of tokens and a token-level F1 score is calculated as: $$ \\text{F1} = \\frac{2 \\times \\text{Precision} \\times \\text{Recall}}{\\text{Precision} + \\text{Recall}}. 
$$\n\\end{itemize}\n\n\nFollowing \\newcite{rajpurkar2016squad}, all punctuation is ignored in the evaluation and, for English, the articles \\ti{a}, \\ti{an}, and \\ti{the} are also ignored.\n\nTo make the evaluation more reliable, it is also common to collect multiple gold answers for each question. The exact match score therefore gives full credit if the prediction matches any of the gold answers, while the F1 score is computed as the maximum over all of the gold answers and then averaged over all of the questions.\n\nLastly, for the \\tf{free-form answer} reading comprehension tasks, there is not yet a consensus on the ideal evaluation metric. A common approach is to use standard evaluation metrics from natural language generation (NLG) tasks such as machine translation or summarization, including BLEU \\cite{papineni2002bleu}, Meteor \\cite{banerjee2005meteor} and ROUGE \\cite{lin2004rouge}.\n"
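The span-prediction metrics described above can be sketched in Python, in the style of the repository's other scripts. This is a minimal illustration, under the stated conventions (lowercasing, stripping punctuation and English articles, full EM credit for matching any gold answer, F1 maximized over gold answers), not the official SQuAD evaluation script; the function names are our own.

```python
import re
import string
from collections import Counter


def normalize(s):
    # Lowercase, strip punctuation and English articles, collapse whitespace.
    s = "".join(ch for ch in s.lower() if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction, golds):
    # Full credit (1.0) if the normalized prediction equals any gold answer.
    return float(any(normalize(prediction) == normalize(g) for g in golds))


def f1_score(prediction, golds):
    # Token-level F1 against one gold answer, maximized over all gold answers.
    def f1(pred, gold):
        pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
        overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_toks)
        recall = overlap / len(gold_toks)
        return 2 * precision * recall / (precision + recall)

    return max(f1(prediction, g) for g in golds)
```

For example, the prediction "the Denver Broncos team" against the gold answer "Denver Broncos" gets EM 0.0 but F1 0.8 (precision 2/3, recall 1.0 after the article is stripped).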
  },
  {
    "path": "conclude.tex",
    "content": "%!TEX root = thesis.tex\n\n\nIn this dissertation, we gave readers a thorough overview of neural reading comprehension: the foundations (\\sys{Part I}) and the applications (\\sys{Part II}), as well as how we contributed to the development of this field since it emerged in late 2015.\n\nIn Chapter~\\ref{chapter:rc-overview}, we walked through the history of reading comprehension, which dates back to the 1970s. At the time, researchers already recognized its importance as a proper way of testing the language understanding abilities of computer programs. However, it was not until the 2010s that, reading comprehension started to be formulated as a supervised learning problem by collecting human-labeled training examples in the form of (passage, question, answer) triples. Since 2015, the field has been completed reshaped, by the creation of large-scale supervised datasets, and the development of neural reading comprehension models.  Although it has been only 3 years so far, the field has been moving strikingly fast. Innovations in building better datasets and more effective models have occurred alternately, and both contributed to the development of the field. We also formally defined the task of reading comprehension, and described the four most common types of problems: \\ti{cloze style}, \\ti{multiple choice}, \\ti{span prediction} and \\ti{free-form answers} and their evaluation metrics.\n\n\nIn Chapter~\\ref{chapter:rc-models}, we covered all the elements of modern neural reading comprehension models. We introduced the \\sys{Stanford Attentive Reader}, which we first proposed for the \\sys{CNN/Daily Mail} cloze style task, and is one of the earliest neural reading comprehension models in this field. Our model has been studied extensively on other cloze style and multiple choice tasks. We later adapted it to the \\sys{SQuAD} dataset and achieved what was then state-of-the-art performance.  
Compared to conventional feature-based models, this model doesn't rely on any downstream linguistic features and all the parameters are jointly optimized. Through empirical experiments and a careful hand-analysis, we concluded that neural models are more powerful at recognizing lexical matches and paraphrases. We also discussed recent advances in developing neural reading comprehension models, including better \\ti{word representations}, \\ti{attention mechanisms}, \\ti{alternatives to LSTMs}, and other advances such as training objectives and data augmentation.\n\nIn Chapter~\\ref{chapter:rc-future}, we discussed future work and open questions in this field. We examined error cases on \\sys{SQuAD} (for both our model and the state-of-the-art model which surpasses human performance). We concluded that these models have been doing very sophisticated matching of text, but they still have difficulty understanding the inherent structure among the entities and events expressed in the text. We later discussed future work in both models and datasets. For models, we argued that besides \\ti{accuracy}, there are other important but overlooked aspects that we will need to work on in the future, including \\ti{speed and scalability}, \\ti{robustness}, and \\ti{interpretability}. We also believe that future models will need more structures and modules to solve more difficult reading comprehension problems. For datasets, we discussed more recent datasets developed after \\sys{SQuAD} --- these datasets either require more complex reasoning across sentences or documents, need to handle longer documents, need to generate free-form answers instead of extracting a single span, or need to predict when there is no answer in the passage. 
Lastly, we examined several questions we think are important to the future of neural reading comprehension.\n\nIn \\sys{Part II}, the key questions we wanted to answer are: Is reading comprehension only a task for measuring language understanding? If we can build high-performing reading comprehension systems which can answer comprehension questions over a short passage of text, can they enable useful applications?\n\nIn Chapter~\\ref{chapter:openqa}, we showed that we can combine information retrieval techniques and neural reading comprehension\nmodels to build an open-domain question-answering system: answering general questions over a large encyclopedia or the Web. In particular, we implemented this idea in the \\sys{DrQA} project, a large-scale, factoid question answering system over English Wikipedia. We demonstrated the feasibility of this approach by evaluating the system on multiple question answering benchmarks. We also proposed a procedure to automatically create additional distantly-supervised training examples from other question answering resources and demonstrated its effectiveness. We hope that our work takes the first step in this research direction and that this new paradigm of combining information retrieval and neural reading comprehension will eventually lead to a new generation of open-domain question answering systems.\n\nIn Chapter~\\ref{chapter:coqa}, we addressed the conversational question answering problem, where a computer system needs to understand a text passage and answer a series of questions that appear in a conversation. To approach this, we built \\sys{CoQA}: a Conversational Question Answering challenge for measuring the ability of machines to participate in a question-answering style conversation. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. 
We also built several competitive baselines for this new task, based on conversational and reading comprehension models. We believe that building such systems will play a crucial role in our future conversational AI systems.\n\nAltogether, we are really excited about the progress that has been made in this field over the past three years and are glad to have been able to contribute to it. At the same time, we also deeply believe that there is still a long way to go towards genuine human-level reading comprehension, and we still face enormous challenges and many open questions that we will need to address in the future. One key challenge is that we still don't have good ways to approach deeper levels of reading comprehension --- those questions which require understanding the reasoning and implications of the text. Often this occurs with \\ti{how} or \\ti{why} questions, such as \\ti{In the story, why is Cynthia upset with her mother?} or \\ti{How does John attempt to make up for his original mistake?} In the future, we will have to address the underlying science of what is being discussed, rather than just answering from text matching, to achieve this level of reading comprehension.\n\nWe also hope to encourage more researchers to work on the applications, or to apply neural reading comprehension to new domains or tasks. We believe that this will lead us towards building better question answering and conversational agents, and we hope to see these ideas implemented and developed in industry applications.\n"
  },
  {
    "path": "fitch.sty",
    "content": "% Macros for Fitch-style natural deduction. \n% Author: Peter Selinger, University of Ottawa\n% Created: Jan 14, 2002\n% Modified: Feb 8, 2005\n% Version: 0.5\n% Copyright: (C) 2002-2005 Peter Selinger\n% Filename: fitch.sty\n% Documentation: fitchdoc.tex\n% URL: http://quasar.mathstat.uottawa.ca/~selinger/fitch/\n\n% License:\n%\n% This program is free software; you can redistribute it and/or modify\n% it under the terms of the GNU General Public License as published by\n% the Free Software Foundation; either version 2, or (at your option)\n% any later version.\n%\n% This program is distributed in the hope that it will be useful, but\n% WITHOUT ANY WARRANTY; without even the implied warranty of\n% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU\n% General Public License for more details.\n%\n% You should have received a copy of the GNU General Public License\n% along with this program; if not, write to the Free Software Foundation, \n% Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.\n\n% USAGE EXAMPLE:\n% \n% The following is a simple example illustrating the usage of this\n% package.  
For detailed instructions and additional functionality, see\n% the user guide, which can be found in the file fitchdoc.tex.\n% \n% \\[\n% \\begin{nd}\n%   \\hypo{1}  {P\\vee Q}   \n%   \\hypo{2}  {\\neg Q}                         \n%   \\open                              \n%   \\hypo{3a} {P}\n%   \\have{3b} {P}        \\r{3a}\n%   \\close                   \n%   \\open\n%   \\hypo{4a} {Q}\n%   \\have{4b} {\\neg Q}   \\r{2}\n%   \\have{4c} {\\bot}     \\ne{4a,4b}\n%   \\have{4d} {P}        \\be{4c}\n%   \\close                             \n%   \\have{5}  {P}        \\oe{1,3a-3b,4a-4d}                 \n% \\end{nd}\n% \\]\n\n{\\chardef\\x=\\catcode`\\*\n\\catcode`\\*=11\n\\global\\let\\nd*astcode\\x}\n\\catcode`\\*=11\n\n% References\n\n\\newcount\\nd*ctr\n\\def\\nd*render{\\expandafter\\ifx\\expandafter\\nd*x\\nd*base\\nd*x\\the\\nd*ctr\\else\\nd*base\\ifnum\\nd*ctr<0\\the\\nd*ctr\\else\\ifnum\\nd*ctr>0+\\the\\nd*ctr\\fi\\fi\\fi}\n\\expandafter\\def\\csname nd*-\\endcsname{}\n\n\\def\\nd*num#1{\\nd*numo{\\nd*render}{#1}\\global\\advance\\nd*ctr1}\n\\def\\nd*numopt#1#2{\\nd*numo{$#1$}{#2}}\n\\def\\nd*numo#1#2{\\edef\\x{#1}\\mbox{$\\x$}\\expandafter\\global\\expandafter\\let\\csname nd*-#2\\endcsname\\x}\n\\def\\nd*ref#1{\\expandafter\\let\\expandafter\\x\\csname nd*-#1\\endcsname\\ifx\\x\\relax%\n  \\errmessage{Undefined natdeduction reference: #1}\\else\\mbox{$\\x$}\\fi}\n\\def\\nd*noop{}\n\\def\\nd*set#1#2{\\ifx\\relax#1\\nd*noop\\else\\global\\def\\nd*base{#1}\\fi\\ifx\\relax#2\\relax\\else\\global\\nd*ctr=#2\\fi}\n\\def\\nd*reset{\\nd*set{}{1}}\n\\def\\nd*refa#1{\\nd*ref{#1}}\n\\def\\nd*aux#1#2{\\ifx#2-\\nd*refa{#1}--\\def\\nd*c{\\nd*aux{}}%\n  \\else\\ifx#2,\\nd*refa{#1}, \\def\\nd*c{\\nd*aux{}}%\n  \\else\\ifx#2;\\nd*refa{#1}; \\def\\nd*c{\\nd*aux{}}%\n  \\else\\ifx#2.\\nd*refa{#1}. 
\\def\\nd*c{\\nd*aux{}}%\n  \\else\\ifx#2)\\nd*refa{#1})\\def\\nd*c{\\nd*aux{}}%\n  \\else\\ifx#2(\\nd*refa{#1}(\\def\\nd*c{\\nd*aux{}}%\n  \\else\\ifx#2\\nd*end\\nd*refa{#1}\\def\\nd*c{}%\n  \\else\\def\\nd*c{\\nd*aux{#1#2}}%\n  \\fi\\fi\\fi\\fi\\fi\\fi\\fi\\nd*c}\n\\def\\ndref#1{\\nd*aux{}#1\\nd*end}\n\n% Layer A\n\n% define various dimensions (explained in fitchdoc.tex):\n\\newlength{\\nd*dim} \n\\newdimen\\nd*depthdim\n\\newdimen\\nd*hsep\n\\newdimen\\ndindent\n\\ndindent=1em\n% user command to redefine dimensions\n\\def\\nddim#1#2#3#4#5#6#7#8{\\nd*depthdim=#3\\relax\\nd*hsep=#6\\relax%\n\\def\\nd*height{#1}\\def\\nd*thickness{#8}\\def\\nd*initheight{#2}%\n\\def\\nd*indent{#5}\\def\\nd*labelsep{#4}\\def\\nd*justsep{#7}}\n% set initial dimensions\n\\nddim{4.5ex}{3.5ex}{1.5ex}{1em}{1.6em}{.5em}{2.5em}{.2mm}\n\n\\def\\nd*v{\\rule[-\\nd*depthdim]{\\nd*thickness}{\\nd*height}}\n\\def\\nd*t{\\rule[-\\nd*depthdim]{0mm}{\\nd*height}\\rule[-\\nd*depthdim]{\\nd*thickness}{\\nd*initheight}}\n\\def\\nd*i{\\hspace{\\nd*indent}} \n\\def\\nd*s{\\hspace{\\nd*hsep}}\n\\def\\nd*g#1{\\nd*f{\\makebox[\\nd*indent][c]{$#1$}}}\n\\def\\nd*f#1{\\raisebox{0pt}[0pt][0pt]{$#1$}}\n\\def\\nd*u#1{\\makebox[0pt][l]{\\settowidth{\\nd*dim}{\\nd*f{#1}}%\n    \\addtolength{\\nd*dim}{2\\nd*hsep}\\hspace{-\\nd*hsep}\\rule[-\\nd*depthdim]{\\nd*dim}{\\nd*thickness}}\\nd*f{#1}}\n\n% Lists\n\n\\def\\nd*push#1#2{\\expandafter\\gdef\\expandafter#1\\expandafter%\n  {\\expandafter\\nd*cons\\expandafter{#1}{#2}}}\n\\def\\nd*pop#1{{\\def\\nd*nil{\\gdef#1{\\nd*nil}}\\def\\nd*cons##1##2%\n    {\\gdef#1{##1}}#1}}\n\\def\\nd*iter#1#2{{\\def\\nd*nil{}\\def\\nd*cons##1##2{##1#2{##2}}#1}}\n\\def\\nd*modify#1#2#3{{\\def\\nd*nil{\\gdef#1{\\nd*nil}}\\def\\nd*cons##1##2%\n    {\\advance#2-1 ##1\\advance#2 1 \\ifnum#2=1\\nd*push#1{#3}\\else%\n      \\nd*push#1{##2}\\fi}#1}}\n\n\\def\\nd*cont#1{{\\def\\nd*t{\\nd*v}\\def\\nd*v{\\nd*v}\\def\\nd*g##1{\\nd*i}%\n    
\\def\\nd*i{\\nd*i}\\def\\nd*nil{\\gdef#1{\\nd*nil}}\\def\\nd*cons##1##2%\n    {##1\\expandafter\\nd*push\\expandafter#1\\expandafter{##2}}#1}}\n\n% Layer B\n\n\\newcount\\nd*n\n\\def\\nd*beginb{\\begingroup\\nd*reset\\gdef\\nd*stack{\\nd*nil}\\nd*push\\nd*stack{\\nd*t}%\n  \\begin{array}{l@{\\hspace{\\nd*labelsep}}l@{\\hspace{\\nd*justsep}}l}}\n\\def\\nd*resumeb{\\begingroup\\begin{array}{l@{\\hspace{\\nd*labelsep}}l@{\\hspace{\\nd*justsep}}l}}\n\\def\\nd*endb{\\end{array}\\endgroup}\n\\def\\nd*hypob#1#2{\\nd*f{\\nd*num{#1}}&\\nd*iter\\nd*stack\\relax\\nd*cont\\nd*stack\\nd*s\\nd*u{#2}&}\n\\def\\nd*haveb#1#2{\\nd*f{\\nd*num{#1}}&\\nd*iter\\nd*stack\\relax\\nd*cont\\nd*stack\\nd*s\\nd*f{#2}&}\n\\def\\nd*havecontb#1#2{&\\nd*iter\\nd*stack\\relax\\nd*cont\\nd*stack\\nd*s\\nd*f{\\hspace{\\ndindent}#2}&}\n\\def\\nd*hypocontb#1#2{&\\nd*iter\\nd*stack\\relax\\nd*cont\\nd*stack\\nd*s\\nd*u{\\hspace{\\ndindent}#2}&}\n\n\\def\\nd*openb{\\nd*push\\nd*stack{\\nd*i}\\nd*push\\nd*stack{\\nd*t}}\n\\def\\nd*closeb{\\nd*pop\\nd*stack\\nd*pop\\nd*stack}\n\\def\\nd*guardb#1#2{\\nd*n=#1\\multiply\\nd*n by 2 \\nd*modify\\nd*stack\\nd*n{\\nd*g{#2}}}\n\n% Layer C\n\n\\def\\nd*clr{\\gdef\\nd*cmd{}\\gdef\\nd*typ{\\relax}}\n\\def\\nd*sto#1#2#3{\\gdef\\nd*typ{#1}\\gdef\\nd*byt{}%\n  \\gdef\\nd*cmd{\\nd*typ{#2}{#3}\\nd*byt\\\\}}\n\\def\\nd*chtyp{\\expandafter\\ifx\\nd*typ\\nd*hypocontb\\def\\nd*typ{\\nd*havecontb}\\else\\def\\nd*typ{\\nd*haveb}\\fi}\n\\def\\nd*hypoc#1#2{\\nd*chtyp\\nd*cmd\\nd*sto{\\nd*hypob}{#1}{#2}}\n\\def\\nd*havec#1#2{\\nd*cmd\\nd*sto{\\nd*haveb}{#1}{#2}}\n\\def\\nd*hypocontc#1{\\nd*chtyp\\nd*cmd\\nd*sto{\\nd*hypocontb}{}{#1}}\n\\def\\nd*havecontc#1{\\nd*cmd\\nd*sto{\\nd*havecontb}{}{#1}}\n\\def\\nd*by#1#2{\\ifx\\nd*x#2\\nd*x\\gdef\\nd*byt{\\mbox{#1}}\\else\\gdef\\nd*byt{\\mbox{#1, \\ndref{#2}}}\\fi}\n\n% multi-line 
macros\n\\def\\nd*mhypoc#1#2{\\nd*mhypocA{#1}#2\\\\\\nd*stop\\\\}\n\\def\\nd*mhypocA#1#2\\\\{\\nd*hypoc{#1}{#2}\\nd*mhypocB}\n\\def\\nd*mhypocB#1\\\\{\\ifx\\nd*stop#1\\else\\nd*hypocontc{#1}\\expandafter\\nd*mhypocB\\fi}\n\\def\\nd*mhavec#1#2{\\nd*mhavecA{#1}#2\\\\\\nd*stop\\\\}\n\\def\\nd*mhavecA#1#2\\\\{\\nd*havec{#1}{#2}\\nd*mhavecB}\n\\def\\nd*mhavecB#1\\\\{\\ifx\\nd*stop#1\\else\\nd*havecontc{#1}\\expandafter\\nd*mhavecB\\fi}\n\\def\\nd*mhypocontc#1{\\nd*mhypocB#1\\\\\\nd*stop\\\\}\n\\def\\nd*mhavecontc#1{\\nd*mhavecB#1\\\\\\nd*stop\\\\}\n\n\\def\\nd*beginc{\\nd*beginb\\nd*clr}\n\\def\\nd*resumec{\\nd*resumeb\\nd*clr}\n\\def\\nd*endc{\\nd*cmd\\nd*endb}\n\\def\\nd*openc{\\nd*cmd\\nd*clr\\nd*openb}\n\\def\\nd*closec{\\nd*cmd\\nd*clr\\nd*closeb}\n\\let\\nd*guardc\\nd*guardb\n\n% Layer D\n\n% macros with optional arguments spelled-out\n\\def\\nd*hypod[#1][#2]#3[#4]#5{\\ifx\\relax#4\\relax\\else\\nd*guardb{1}{#4}\\fi\\nd*mhypoc{#3}{#5}\\nd*set{#1}{#2}}\n\\def\\nd*haved[#1][#2]#3[#4]#5{\\ifx\\relax#4\\relax\\else\\nd*guardb{1}{#4}\\fi\\nd*mhavec{#3}{#5}\\nd*set{#1}{#2}}\n\\def\\nd*havecont#1{\\nd*mhavecontc{#1}}\n\\def\\nd*hypocont#1{\\nd*mhypocontc{#1}}\n\\def\\nd*base{undefined}\n\\def\\nd*opend[#1]#2{\\nd*cmd\\nd*clr\\nd*openb\\nd*guard{#1}#2}\n\\def\\nd*close{\\nd*cmd\\nd*clr\\nd*closeb}\n\\def\\nd*guardd[#1]#2{\\nd*guardb{#1}{#2}}\n\n% Handling of optional arguments.\n\n\\def\\nd*optarg#1#2#3{\\ifx[#3\\def\\nd*c{#2#3}\\else\\def\\nd*c{#2[#1]{#3}}\\fi\\nd*c}\n\\def\\nd*optargg#1#2#3{\\ifx[#3\\def\\nd*c{#1#3}\\else\\def\\nd*c{#2{#3}}\\fi\\nd*c}\n\n\\def\\nd*five#1{\\nd*optargg{\\nd*four{#1}}{\\nd*two{#1}}}\n\\def\\nd*four#1[#2]{\\nd*optarg{0}{\\nd*three{#1}[#2]}}\n\\def\\nd*three#1[#2][#3]#4{\\nd*optarg{}{#1[#2][#3]{#4}}}\n\\def\\nd*two#1{\\nd*three{#1}[\\relax][]}\n\n\\def\\nd*have{\\nd*five{\\nd*haved}}\n\\def\\nd*hypo{\\nd*five{\\nd*hypod}}\n\\def\\nd*open{\\nd*optarg{}{\\nd*opend}}\n\\def\\nd*guard{\\nd*optarg{1}{\\nd*guardd}}\n\n\\def\\nd*init{%\n  
\\let\\open\\nd*open%\n  \\let\\close\\nd*close%\n  \\let\\hypo\\nd*hypo%\n  \\let\\have\\nd*have%\n  \\let\\hypocont\\nd*hypocont%\n  \\let\\havecont\\nd*havecont%\n  \\let\\by\\nd*by%\n  \\let\\guard\\nd*guard%\n  \\def\\ii{\\by{$\\Rightarrow$I}}%\n  \\def\\ie{\\by{$\\Rightarrow$E}}%\n  \\def\\Ai{\\by{$\\forall$I}}%\n  \\def\\Ae{\\by{$\\forall$E}}%\n  \\def\\Ei{\\by{$\\exists$I}}%\n  \\def\\Ee{\\by{$\\exists$E}}%\n  \\def\\ai{\\by{$\\wedge$I}}%\n  \\def\\ae{\\by{$\\wedge$E}}%\n  \\def\\ai{\\by{$\\wedge$I}}%\n  \\def\\ae{\\by{$\\wedge$E}}%\n  \\def\\oi{\\by{$\\vee$I}}%\n  \\def\\oe{\\by{$\\vee$E}}%\n  \\def\\ni{\\by{$\\neg$I}}%\n  \\def\\ne{\\by{$\\neg$E}}%\n  \\def\\be{\\by{$\\bot$E}}%\n  \\def\\nne{\\by{$\\neg\\neg$E}}%\n  \\def\\r{\\by{R}}%\n}\n\n\\newenvironment{nd}{\\begingroup\\nd*init\\nd*beginc}{\\nd*endc\\endgroup}\n\\newenvironment{ndresume}{\\begingroup\\nd*init\\nd*resumec}{\\nd*endc\\endgroup}\n\n\\catcode`\\*=\\nd*astcode\n\n% End of file fitch.sty\n\n"
  },
  {
    "path": "img/scripts/gen_cnn_analysis.py",
    "content": "from pylab import figure, ylabel, xticks, bar, \\\n                  legend, savefig, text\n\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 10, 5\n\ngroups = [\"Exact match\", \"Paraphrasing\", \"Partial clue\", \"Multi. sent.\", \"Coref. errors\", \"Hard\", \"All\"]\n\ndata = [[100, 95.1, 89.5, 50.0, 37.5, 5.9, 74.0],\n        [100, 78.1, 73.7, 50.0, 50.0, 11.8, 66.0]]\n\nfigure()\nylabel('Accuracy')\n\nx1 = [2.0 + 10.0 * k for k in range(7)]\n\nxticks([x + 0.75 for x in x1], groups)\n\nwidth = 2.5\nbar(x1, data[0], width=width, color=\"#56B4E9\", label=\"Stanford Attentive Reader\", edgecolor='k')\nbar([x + width for x in x1], data[1], width=width, color=\"#E69F00\", label=\"Feature-based Classifier\", edgecolor='k')\n# bar([x + 1.0 for x in x1], data[2], width=0.45, color=\"#94c6da\", label=\"WebQuestions\", edgecolor='k')\n# bar([x + 1.5 for x in x1], data[3], width=0.45, color=\"#1770ab\", label=\"WikiMovies\", edgecolor='k')\n\nfor j in range(len(data[0])):\n    text(x1[j] - 1.5, data[0][j] + 0.75, str(data[0][j]))\n\nfor j in range(len(data[1])):\n    text(x1[j] + width - 1.0, data[1][j] + 0.75, str(data[1][j]))\n\nlegend()\nsavefig('barplot.png')\n"
  },
  {
    "path": "img/scripts/gen_qa_stat.py",
    "content": "from pylab import figure, ylabel, xticks, bar, \\\n                  legend, savefig, text\n\ngroups = [\"Question length\", \"Answer length\"]\n\ndata = [\n    [10.4, 3.2],\n    [7.2, 1.8],\n    [6.7, 2.4],\n    [7.5, 2.1]\n    ]\n\nfigure()\nylabel('#tokens')\n\nx1 = [2.0, 5.0]\n\nxticks([x + 0.75 for x in x1], groups)\n\nbar(x1, data[0], width=0.45, color=\"#c30d24\", label=\"SQuAD\", edgecolor='k')\nbar([x + 0.5 for x in x1], data[1], width=0.45, color=\"#cccccc\", label=\"TREC\", edgecolor='k')\nbar([x + 1.0 for x in x1], data[2], width=0.45, color=\"#94c6da\", label=\"WebQuestions\", edgecolor='k')\nbar([x + 1.5 for x in x1], data[3], width=0.45, color=\"#1770ab\", label=\"WikiMovies\", edgecolor='k')\n\nfor i in range(len(data)):\n    for j in range(len(data[i])):\n        text(x1[j] + i * 0.5 - 0.1, data[i][j] + 0.1, str(data[i][j]))\n\nlegend()\nsavefig('barplot.png')\n"
  },
  {
    "path": "img/scripts/gen_squad_progress.py",
    "content": "import matplotlib\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport datetime as dt\nmatplotlib.rcParams['legend.handlelength'] = 0\nmatplotlib.rcParams['legend.numpoints'] = 1\n\n\ndef isfloat(value):\n    try:\n        float(value)\n        return True\n    except ValueError:\n        return False\n\n\nmapping = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,\n           'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}\n\n\ndef get_month(s):\n    assert s in mapping\n    return mapping[s]\n\n\ndef earlier(date1, date2):\n    if date1[0] != date2[0]:\n        return date1[0] < date2[0]\n    if date1[1] != date2[1]:\n        return date1[1] < date2[1]\n    return False\n\n\nrecords = []\nwith open('squad_leaderboard.txt') as f:\n    for line in f.readlines():\n        sp = line.strip().split('\\t')\n        if len(sp) == 2 and ('2016' in sp[0] or '2017' in sp[0] or '2018' in sp[0]):\n            date = sp[0]\n            system = sp[1]\n        if len(sp) >= 2 and isfloat(sp[-2]) and isfloat(sp[-1]):\n            em = float(sp[-2])\n            f1 = float(sp[-1])\n\n            if 'ensemble' not in system:\n                print('-' * 100)\n                year = int(date.split(' ')[-1])\n                month = get_month(date.split(' ')[0])\n                date = int(date.split(' ')[1][:-1])\n\n                print(year, month)\n                print(system)\n                print(em, f1)\n\n                if len(records) == 0 or earlier((year, month), (records[-1][0], records[-1][1])):\n                    records.append((year, month, date, f1))\nprint(records)\nrecords.append((2016, 6, 16, 51.0))\n\nx = []\ny = []\nfor rec in records:\n    x.append(dt.date(rec[0], rec[1], rec[2]))\n    y.append(rec[3])\n\nplt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))\nplt.plot([x[-1], x[-2]], [y[-1], y[-2]], '-b', marker='v', label='non-neural', markevery=5)\nplt.plot(x[:-1], y[:-1], 
'-b', marker='x', label='neural')\nplt.gca().set_ylim([45.0, 100.0])\nplt.gcf().autofmt_xdate()\nplt.ylabel('F1')\nplt.text(x[-1], 92.0, 'human: 91.2')\nplt.legend(loc=4)\nplt.axhline(y=91.221, color='r', linestyle='--')\nplt.savefig('squad_progress.png')\n"
  },
  {
    "path": "img/scripts/gen_timeline.py",
    "content": "\"\"\"\n    Fourth example on:\n    https://kristw.github.io/d3kit-timeline/\n\n    Labella.py doesn't implement the full colorscale from d3, just category 10\n    and category 20. Below we show an example of implementing specific colors\n    based on the input data.\n\"\"\"\n\nfrom datetime import date\n\nfrom labella.timeline import TimelineTex, TimelineSVG\n\n\ndef color(d):\n    idx = d['episode']\n    if idx == 1:\n        return '#000000'\n    else:\n        return '#1F77B4'\n\n\ndef main():\n    items = [\n            {'time': date(2016, 11, 1), 'episode': 1,\n                'text': 'SQuAD 1.1'},\n            {'time': date(2015, 12, 7), 'episode': 1,\n                'text': 'CNN/Daily Mail'},\n            {'time': date(2016, 5, 2), 'episode': 1,\n                'text': 'Children Book Test'},\n            {'time': date(2017, 7, 30), 'episode': 1,\n                'text': 'TriviaQA'},\n            {'time': date(2017, 9, 1), 'episode': 1,\n                'text': 'RACE'},\n            {'time': date(2018, 5, 1), 'episode': 1,\n                'text': 'NarrativeQA'},\n            {'time': date(2018, 7, 15), 'episode': 1,\n                'text': 'SQuAD 2.0'},\n            {'time': date(2018, 11, 2), 'episode': 1,\n                'text': 'HotpotQA'},\n            {'time': date(2015, 12, 7), 'episode': 2,\n                'text': 'Attentive Reader'},\n            {'time': date(2016, 8, 7), 'episode': 2,\n                'text': 'Stanford Attentive Reader'},\n            {'time': date(2017, 4, 24), 'episode': 2,\n                'text': 'Match-LSTM'},\n            {'time': date(2017, 4, 24), 'episode': 2,\n                'text': 'BiDAF'},\n            {'time': date(2017, 7, 30), 'episode': 2,\n                'text': 'R-Net'},\n            {'time': date(2018, 4, 30), 'episode': 2,\n                'text': 'QANet'},\n            {'time': date(2018, 6, 1), 'episode': 2,\n                'text': 'BiDAF+self-att.+ELMo'},\n            {'time': 
date(2018, 10, 11), 'episode': 2,\n                'text': 'BERT'},\n            ]\n\n    options = {\n        'initialWidth': 400,\n        'initialHeight': 400,\n        'direction': 'right',\n        'dotColor': color,\n        'labelBgColor': color,\n        'linkColor': color,\n        'textFn': lambda d: d['text'],\n        'margin': {'left': 0, 'right': 0, 'top': 0, 'bottom': 0},\n        'layerGap': 20,\n        'labella': {\n            'maxPos': 1200,\n            'algorithm': 'simple'\n            }\n        }\n\n    tl = TimelineSVG(items, options=options)\n    tl.export('timeline.svg')\n\n    tl = TimelineTex(items, options=options)\n    tl.export('timeline.tex')\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "img/scripts/squad_leaderboard.txt",
    "content": "1\nOct 05, 2018\tBERT (ensemble)\nGoogle AI Language\nhttps://arxiv.org/abs/1810.04805\t87.433\t93.160\n2\nOct 05, 2018\tBERT (single model)\nGoogle AI Language\nhttps://arxiv.org/abs/1810.04805\t85.083\t91.835\n2\nSep 09, 2018\tnlnet (ensemble)\nMicrosoft Research Asia\n85.356\t91.202\n2\nSep 26, 2018\tnlnet (ensemble)\nMicrosoft Research Asia\n85.954\t91.677\n3\nJul 11, 2018\tQANet (ensemble)\nGoogle Brain & CMU\n84.454\t90.490\n4\nJul 08, 2018\tr-net (ensemble)\nMicrosoft Research Asia\n84.003\t90.147\n5\nMar 19, 2018\tQANet (ensemble)\nGoogle Brain & CMU\n83.877\t89.737\n5\nSep 09, 2018\tnlnet (single model)\nMicrosoft Research Asia\n83.468\t90.133\n5\nJun 20, 2018\tMARS (ensemble)\nYUANFUDAO research NLP\n83.982\t89.796\n6\nSep 01, 2018\tMARS (single model)\nYUANFUDAO research NLP\n83.185\t89.547\n7\nJan 03, 2018\tr-net+ (ensemble)\nMicrosoft Research Asia\n82.650\t88.493\n7\nMay 09, 2018\tMARS (single model)\nYUANFUDAO research NLP\n82.587\t88.880\n7\nFeb 19, 2018\tReinforced Mnemonic Reader + A2D (ensemble model)\nMicrosoft Research Asia & NUDT\n82.849\t88.764\n7\nJan 22, 2018\tHybrid AoA Reader (ensemble)\nJoint Laboratory of HIT and iFLYTEK Research\n82.482\t89.281\n7\nJun 20, 2018\tQANet (single)\nGoogle Brain & CMU\n82.471\t89.306\n7\nMar 06, 2018\tQANet (ensemble)\nGoogle Brain & CMU\n82.744\t89.045\n7\nJun 21, 2018\tMARS (single model)\nYUANFUDAO research NLP\n83.122\t89.224\n8\nJan 05, 2018\tSLQA+ (ensemble)\nAlibaba iDST NLP\n82.440\t88.607\n9\nFeb 02, 2018\tReinforced Mnemonic Reader (ensemble model)\nNUDT and Fudan University\nhttps://arxiv.org/abs/1705.02798\t82.283\t88.533\n9\nFeb 27, 2018\tQANet (single model)\nGoogle Brain & CMU\n82.209\t88.608\n10\nDec 22, 2017\tAttentionReader+ (ensemble)\nTencent DPDAC NLP\n81.790\t88.163\n11\nMay 09, 2018\tReinforced Mnemonic Reader + A2D (single model)\nMicrosoft Research Asia & NUDT\n81.538\t88.130\n11\nDec 17, 2017\tr-net (ensemble)\nMicrosoft Research 
Asia\nhttp://aka.ms/rnet\t82.136\t88.126\n12\nMay 09, 2018\tReinforced Mnemonic Reader + A2D + DA (single model)\nMicrosoft Research Asia & NUDT\n81.401\t88.122\n12\nApr 23, 2018\tr-net (single model)\nMicrosoft Research Asia\n81.391\t88.170\n12\nApr 03, 2018\tKACTEIL-MRC(GF-Net+) (ensemble)\nKangwon National University, Natural Language Processing Lab.\n81.496\t87.557\n12\nFeb 27, 2018\tQANet (single model)\nGoogle Brain & CMU\n80.929\t87.773\n12\nNov 17, 2017\tBiDAF + Self Attention + ELMo (ensemble)\nAllen Institute for Artificial Intelligence\n81.003\t87.432\n12\nFeb 19, 2018\tReinforced Mnemonic Reader + A2D (single model)\nMicrosoft Research Asia & NUDT\n80.919\t87.492\n13\nFeb 12, 2018\tReinforced Mnemonic Reader + A2D (single model)\nMicrosoft Research Asia & NUDT\n80.489\t87.454\n13\nApr 12, 2018\tAVIQA+ (ensemble)\naviqa team\n80.615\t87.311\n14\nMar 20, 2018\tDNET (ensemble)\nQA geeks\n80.164\t86.721\n14\nJan 22, 2018\tHybrid AoA Reader (single model)\nJoint Laboratory of HIT and iFLYTEK Research\n80.027\t87.288\n14\nJan 12, 2018\tEAZI+ (ensemble)\nYiwise NLP Group\n80.426\t86.912\n14\nJan 13, 2018\tSLQA+\nsingle model\n80.436\t87.021\n14\nJan 04, 2018\t{EAZI} (ensemble)\nYiwise NLP Group\n80.436\t86.912\n15\nFeb 12, 2018\tBiDAF + Self Attention + ELMo + A2D (single model)\nMicrosoft Research Asia & NUDT\n79.996\t86.711\n16\nJan 29, 2018\tReinforced Mnemonic Reader (single model)\nNUDT and Fudan University\nhttps://arxiv.org/abs/1705.02798\t79.545\t86.654\n16\nApr 10, 2018\tUnnamed submission by null\n80.027\t86.612\n16\nFeb 23, 2018\tMAMCN+ (single model)\nSamsung Research\n79.692\t86.727\n16\nJan 03, 2018\tr-net+ (single model)\nMicrosoft Research Asia\n79.901\t86.536\n16\nDec 28, 2017\tSLQA+ (single model)\nAlibaba iDST NLP\n79.199\t86.590\n16\nDec 05, 2017\tSAN (ensemble model)\nMicrosoft Business AI Solutions Team\nhttps://arxiv.org/abs/1712.03556\t79.608\t86.496\n17\nOct 17, 2017\tInteractive AoA Reader+ (ensemble)\nJoint Laboratory of HIT and 
iFLYTEK\n79.083\t86.450\n18\nOct 24, 2017\tFusionNet (ensemble)\nMicrosoft Business AI Solutions Team\nhttps://arxiv.org/abs/1711.07341\t78.978\t86.016\n18\nJun 01, 2018\tMDReader\nsingle model\n79.031\t86.006\n18\nFeb 01, 2018\tUnnamed submission by null\n78.999\t86.151\n19\nOct 24, 2018\tWDNet (single model)\nBeijing Normal University\n78.926\t85.810\n19\nOct 22, 2017\tDCN+ (ensemble)\nSalesforce Research\nhttps://arxiv.org/abs/1711.00106\t78.852\t85.996\n20\nMar 29, 2018\tKACTEIL-MRC(GF-Net+) (single model)\nKangwon National University, Natural Language Processing Lab.\n78.664\t85.780\n20\nNov 03, 2017\tBiDAF + Self Attention + ELMo (single model)\nAllen Institute for Artificial Intelligence\n78.580\t85.833\n21\nMay 09, 2018\tKakaoNet (single model)\nKakao NLP Team\n78.401\t85.724\n22\nNov 30, 2017\tSLQA(ensemble)\nAlibaba iDST NLP\n78.328\t85.682\n22\nJan 02, 2018\tConductor-net (ensemble)\nCMU\nhttps://arxiv.org/abs/1710.10504\t78.433\t85.517\n22\nJun 01, 2018\tMDReader0\nsingle model\n78.171\t85.543\n22\nSep 18, 2018\tBiDAF++ with pair2vec (single model)\nUW and FAIR\n78.223\t85.535\n22\nJan 03, 2018\tMEMEN (single model)\nZhejiang University\nhttps://arxiv.org/abs/1707.09098\t78.234\t85.344\n22\nMar 19, 2018\taviqa (ensemble)\naviqa team\n78.496\t85.469\n23\nJan 29, 2018\ttest\nsingle\n78.087\t85.348\n24\nJul 25, 2017\tInteractive AoA Reader (ensemble)\nJoint Laboratory of HIT and iFLYTEK Research\n77.845\t85.297\n25\nJan 10, 2018\tUnnamed submission by null\n77.436\t85.130\n26\nDec 06, 2017\tAttentionReader+ (single)\nTencent DPDAC NLP\n77.342\t84.925\n26\nSep 18, 2018\tBiDAF++ (single model)\nUW and FAIR\n77.573\t84.858\n26\nDec 13, 2017\tRaSoR + TR + LM (single model)\nTel-Aviv University\nhttps://arxiv.org/abs/1712.03609\t77.583\t84.163\n26\nApr 10, 2018\tUnnamed submission by null\n77.489\t84.735\n26\nMar 20, 2018\tDNET (single model)\nQA geeks\n77.646\t84.905\n27\nNov 06, 2017\tConductor-net 
(ensemble)\nCMU\nhttps://arxiv.org/abs/1710.10504\t76.996\t84.630\n27\nSep 26, 2018\t{gqa} (single model)\nFAIR\n77.090\t83.931\n27\nDec 21, 2017\tJenga (ensemble)\nFacebook AI Research\n77.237\t84.466\n27\nJan 23, 2018\tMARS (single model)\nYUANFUDAO research NLP\n76.859\t84.739\n28\nNov 01, 2017\tSAN (single model)\nMicrosoft Business AI Solutions Team\nhttps://arxiv.org/abs/1712.03556\t76.828\t84.396\n29\nOct 13, 2017\tr-net (single model)\nMicrosoft Research Asia\nhttp://aka.ms/rnet\t76.461\t84.265\n29\nDec 19, 2017\tFRC (single model)\nin review\n76.240\t84.599\n29\nMay 14, 2018\tVS^3-NET (single model)\nKangwon National University in South Korea\n76.775\t84.491\n30\nOct 22, 2017\tConductor-net (ensemble)\nCMU\n76.146\t83.991\n31\nSep 08, 2017\tFusionNet (single model)\nMicrosoft Business AI Solutions team\nhttps://arxiv.org/abs/1711.07341\t75.968\t83.900\n31\nOct 18, 2018\tKAR (single model)\nYork University\nhttps://arxiv.org/abs/1809.03449\t76.125\t83.538\n32\nJul 14, 2017\tsmarnet (ensemble)\nEigen Technology & Zhejiang University\n75.989\t83.475\n32\nOct 22, 2017\tInteractive AoA Reader+ (single model)\nJoint Laboratory of HIT and iFLYTEK\n75.821\t83.843\n32\nMar 15, 2018\tAVIQA-v2 (single model)\naviqa team\n75.926\t83.305\n33\nOct 05, 2018\tUnnamed submission by null\n74.950\t83.294\n33\nAug 18, 2017\tRaSoR + TR (single model)\nTel-Aviv University\nhttps://arxiv.org/abs/1712.03609\t75.789\t83.261\n34\nOct 23, 2017\tDCN+ (single model)\nSalesforce Research\nhttps://arxiv.org/abs/1711.00106\t75.087\t83.081\n35\nFeb 13, 2018\tSSR-BiDAF\nensemble model\n74.541\t82.477\n35\nNov 01, 2017\tMixed model (ensemble)\nSean\n75.265\t82.769\n36\nJan 02, 2018\tConductor-net (single model)\nCMU\nhttps://arxiv.org/abs/1710.10504\t74.405\t82.742\n36\nNov 17, 2017\ttwo-attention-self-attention (ensemble)\nguotong1988\n75.223\t82.716\n36\nMay 21, 2017\tMEMEN (ensemble)\nEigen Technology & Zhejiang University\nhttps://arxiv.org/abs/1707.09098\t75.370\t82.658\n37\nMar 09, 
2017\tReasoNet (ensemble)\nMSR Redmond\nhttps://arxiv.org/abs/1609.05284\t75.034\t82.552\n38\nAug 14, 2018\teeAttNet (single model)\nBBD NLP Team\nhttps://www.bbdservice.com\t74.604\t82.501\n38\nJul 10, 2017\tDCN+ (single model)\nSalesforce Research\nhttps://arxiv.org/abs/1711.00106\t74.866\t82.806\n38\nFeb 06, 2018\tJenga (single model)\nFacebook AI Research\n74.373\t82.845\n38\nOct 27, 2017\tUnnamed submission by null\n74.489\t82.312\n38\nOct 31, 2017\tSLQA (single model)\nAlibaba iDST NLP\n74.489\t82.815\n39\nJul 14, 2017\tMnemonic Reader (ensemble)\nNUDT and Fudan University\nhttps://arxiv.org/abs/1705.02798\t74.268\t82.371\n40\nDec 23, 2017\tS^3-Net (ensemble)\nKangwon National University in South Korea\n74.121\t82.342\n41\nJul 25, 2017\tInteractive AoA Reader (single model)\nJoint Laboratory of HIT and iFLYTEK Research\n73.639\t81.931\n41\nJul 29, 2017\tSEDT (ensemble model)\nCMU\nhttps://arxiv.org/abs/1703.00572\t74.090\t81.761\n42\nDec 14, 2017\tJenga (single model)\nFacebook AI Research\n73.303\t81.754\n42\nNov 06, 2017\tConductor-net (single)\nCMU\nhttps://arxiv.org/abs/1710.10504\t73.240\t81.933\n42\nApr 22, 2017\tSEDT+BiDAF (ensemble)\nCMU\nhttps://arxiv.org/abs/1703.00572\t73.723\t81.530\n42\nFeb 22, 2017\tBiDAF (ensemble)\nAllen Institute for AI & University of Washington\nhttps://arxiv.org/abs/1611.01603\t73.744\t81.525\n42\nJan 24, 2017\tMulti-Perspective Matching (ensemble)\nIBM Research\nhttps://arxiv.org/abs/1612.04211\t73.765\t81.257\n42\nJul 06, 2017\tSSAE (ensemble)\nTsinghua University\n74.080\t81.665\n43\nMay 01, 2017\tjNet (ensemble)\nUSTC & National Research Council Canada & York University\nhttps://arxiv.org/abs/1703.04617\t73.010\t81.517\n44\nOct 22, 2017\tConductor-net (single)\nCMU\n72.590\t81.415\n44\nApr 17, 2018\tUnnamed submission by null\n72.831\t80.622\n44\nNov 16, 2017\ttwo-attention-self-attention (single model)\nguotong1988\n72.600\t81.011\n44\nApr 12, 2017\tT-gating (ensemble)\nPeking University\n72.758\t81.001\n44\nSep 20, 
2017\tBiDAF + Self Attention (single model)\nAllen Institute for Artificial Intelligence\nhttps://arxiv.org/abs/1710.10723\t72.139\t81.048\n45\nDec 15, 2017\tS^3-Net (single model)\nKangwon National University in South Korea\n71.908\t81.023\n45\nApr 17, 2018\tUnnamed submission by null\n72.831\t80.622\n46\nMar 03, 2018\tAVIQA (single model)\naviqa team\n72.485\t80.550\n47\nNov 06, 2017\tattention+self-attention (single model)\nguotong1988\n71.698\t80.462\n48\nNov 01, 2016\tDynamic Coattention Networks (ensemble)\nSalesforce Research\nhttps://arxiv.org/abs/1611.01604\t71.625\t80.383\n49\nJul 14, 2017\tsmarnet (single model)\nEigen Technology & Zhejiang University\nhttps://arxiv.org/abs/1710.02772\t71.415\t80.160\n49\nApr 13, 2017\tQFASE\nNUS\n71.898\t79.989\n50\nOct 27, 2017\tM-NET (single)\nUFL\n71.016\t79.835\n50\nApr 22, 2018\tMAMCN (single model)\nSamsung Research\n70.985\t79.939\n50\nJul 14, 2017\tMnemonic Reader (single model)\nNUDT and Fudan University\nhttps://arxiv.org/abs/1705.02798\t70.995\t80.146\n50\nMay 23, 2018\tAttReader (single)\nCollege of Computer & Information Science, SouthWest University, Chongqing, China\n71.373\t79.725\n50\nMar 24, 2017\tjNet (single model)\nUSTC & National Research Council Canada & York University\nhttps://arxiv.org/abs/1703.04617\t70.607\t79.821\n50\nApr 02, 2017\tRuminating Reader (single model)\nNew York University\nhttps://arxiv.org/abs/1704.07415\t70.639\t79.456\n50\nMar 14, 2017\tDocument Reader (single model)\nFacebook AI Research\nhttps://arxiv.org/abs/1704.00051\t70.733\t79.353\n50\nDec 28, 2016\tFastQAExt\nGerman Research Center for Artificial Intelligence\nhttps://arxiv.org/abs/1703.04816\t70.849\t78.857\n50\nMay 13, 2017\tRaSoR (single model)\nGoogle NY, Tel-Aviv University\nhttps://arxiv.org/abs/1611.01436\t70.849\t78.741\n50\nMar 08, 2017\tReasoNet (single model)\nMSR Redmond\nhttps://arxiv.org/abs/1609.05284\t70.555\t79.364\n51\nApr 14, 2017\tMulti-Perspective Matching (single model)\nIBM 
Research\nhttps://arxiv.org/abs/1612.04211\t70.387\t78.784\n52\nAug 30, 2017\tSimpleBaseline (single model)\nTechnical University of Vienna\n69.600\t78.236\n52\nFeb 05, 2018\tSSR-BiDAF\nsingle model\n69.443\t78.358\n53\nApr 12, 2017\tSEDT+BiDAF (single model)\nCMU\nhttps://arxiv.org/abs/1703.00572\t68.478\t77.971\n54\nJun 25, 2017\tPQMN (single model)\nKAIST & AIBrain & Crosscert\n68.331\t77.783\n55\nApr 12, 2017\tT-gating (single model)\nPeking University\n68.132\t77.569\n56\nNov 28, 2016\tBiDAF (single model)\nAllen Institute for AI & University of Washington\nhttps://arxiv.org/abs/1611.01603\t67.974\t77.323\n56\nFeb 22, 2018\tUnnamed submission by null\n68.478\t77.220\n57\nFeb 22, 2018\tUnnamed submission by null\n68.425\t77.077\n57\nDec 28, 2016\tFastQA\nGerman Research Center for Artificial Intelligence\nhttps://arxiv.org/abs/1703.04816\t68.436\t77.070\n57\nJul 29, 2017\tSEDT (single model)\nCMU\nhttps://arxiv.org/abs/1703.00572\t68.163\t77.527\n58\nOct 26, 2016\tMatch-LSTM with Ans-Ptr (Boundary) (ensemble)\nSingapore Management University\nhttps://arxiv.org/abs/1608.07905\t67.901\t77.022\n58\nJan 22, 2018\tFABIR (Single Model)\nin review\n67.744\t77.605\n59\nSep 19, 2017\tAllenNLP BiDAF (single model)\nAllen Institute for AI\nhttp://allennlp.org/\t67.618\t77.151\n60\nFeb 05, 2017\tIterative Co-attention Network\nFudan University\n67.502\t76.786\n61\nJan 03, 2018\tnewtest\nsingle model\n66.527\t75.787\n61\nNov 01, 2016\tDynamic Coattention Networks (single model)\nSalesforce Research\nhttps://arxiv.org/abs/1611.01604\t66.233\t75.896\n62\nFeb 24, 2018\tUnnamed submission by null\n65.992\t75.469\n63\nJan 10, 2018\tUnnamed submission by null\n64.796\t74.272\n64\nDec 09, 2017\tUnnamed submission by ravioncodalab\n64.439\t73.921\n64\nOct 26, 2016\tMatch-LSTM with Bi-Ans-Ptr (Boundary)\nSingapore Management University\nhttps://arxiv.org/abs/1608.07905\t64.744\t73.743\n65\nFeb 19, 2017\tAttentive CNN context with LSTM\nNLPR, CASIA\n63.306\t73.463\n66\nNov 02, 
2016\tFine-Grained Gating\nCarnegie Mellon University\nhttps://arxiv.org/abs/1611.01724\t62.446\t73.327\n66\nSep 21, 2017\tOTF dict+spelling (single)\nUniversity of Montreal\nhttps://arxiv.org/abs/1706.00286\t64.083\t73.056\n67\nSep 21, 2017\tOTF spelling (single)\nUniversity of Montreal\nhttps://arxiv.org/abs/1706.00286\t62.897\t72.016\n68\nSep 21, 2017\tOTF spelling+lemma (single)\nUniversity of Montreal\nhttps://arxiv.org/abs/1706.00286\t62.604\t71.968\n69\nSep 28, 2016\tDynamic Chunk Reader\nIBM\nhttps://arxiv.org/abs/1610.09996\t62.499\t70.956\n70\nAug 27, 2016\tMatch-LSTM with Ans-Ptr (Boundary)\nSingapore Management University\nhttps://arxiv.org/abs/1608.07905\t60.474\t70.695\n71\nSep 11, 2018\tUnnamed submission by Will_Wu\n59.058\t69.436\n72\nJan 10, 2018\tUnnamed submission by null\n58.764\t69.276\n73\nAug 27, 2016\tMatch-LSTM with Ans-Ptr (Sentence)\nSingapore Management University\nhttps://arxiv.org/abs/1608.07905\t54.505\t67.748\n74\nOct 26, 2018\tUnnamed submission by minjoon\n52.533\t62.757\n"
  },
  {
    "path": "intro.tex",
    "content": "%!TEX root = thesis.tex\n\n\\section{Motivation}\n\nTeaching machines to understand human language documents is one of the most elusive and long-standing challenges in Artificial Intelligence. Before we proceed, we must ask: what does it mean to understand human language? Figure~\\ref{fig:mctest-example} demonstrates a children's story from the \\sys{MCTest} dataset~\\cite{richardson2013mctest}, with simple vocabulary and grammar. To process such a passage of text, the NLP community has put decades of effort into solving different tasks for various aspects of text understanding, including:\n\\begin{enumerate}[(a)]\n    \\item\n        \\tf{part-of-speech tagging}. It requires our machines to understand that, for example, in the first sentence \\ti{Alyssa got to the beach after a long trip.}, \\ti{Alyssa} is a proper noun, \\ti{beach} and \\ti{trip} are common nouns, \\ti{got} is a verb in its past tense, \\ti{long} is an adjective, and \\ti{after} is a preposition.\n    \\item\n        \\tf{named entity recognition}. Our machines should also understand that \\ti{Alyssa}, \\ti{Ellen}, \\ti{Kristen} are the names of people in the story while \\ti{Charlotte}, \\ti{Atlanta} and \\ti{Miami} are the names of locations.\n    \\item\n        \\tf{syntactic parsing}. To understand the meaning of each individual sentence, our machines also need to understand the relationship between words, or the syntactic (grammatical) structure. Taking the first sentence \\ti{Alyssa got to the beach after a long trip.} as an example again, the machines should understand that \\ti{Alyssa} is the subject, and \\ti{beach} is the object of the verb \\ti{got}, while \\ti{after a long trip} as a whole is a prepositional phrase which describes a temporal relationship with the verb.\n    \\item\n        \\tf{coreference resolution}. Furthermore, our machines even need to understand the interplay between sentences. 
For example, the mention \\ti{She} in the sentence \\ti{She's now in Miami} refers to \\ti{Alyssa} mentioned in the first sentence, while the mention \\ti{The girls} refers to \\ti{Alyssa, Ellen, Kristen and Rachel} in the previous sentences.\n\\end{enumerate}\n\n\\begin{figure}[!t]\n\\center\n\\begin{tabular}{l p{13cm}}\n\\toprule\n    &{\\tf{Alyssa}} got to the beach after a long trip. She's from Charlotte. She traveled from Atlanta. She's now in Miami. She went to Miami to visit some friends. But she wanted some time to herself at the beach, so she went there first. After going swimming and laying out, she went to her friend \\tf{Ellen}'s house. \\tf{Ellen} greeted {\\tf{Alyssa}} and they both had some lemonade to drink. {\\tf{Alyssa}} called her friends \\tf{Kristen} and \\tf{Rachel} to meet at \\tf{Ellen}'s house. The girls traded stories and caught up on their lives. It was a happy time for everyone. The girls went to a restaurant for dinner. The restaurant had a special on catfish. \\tf{Alyssa} enjoyed the restaurant's special. \\tf{Ellen} ordered a salad. \\tf{Kristen} had soup. \\tf{Rachel} had a steak. After eating, the ladies went back to \\tf{Ellen}'s house to have fun. They had lots of fun. They stayed the night because they were tired. {\\tf{Alyssa}} was happy to spend time with her friends again. \\\\\n\\midrule\n  (a) & \\tf{Question:} What city is Alyssa in? \\\\\n  &\\tf{Answer}: Miami \\\\\n\\vspace{0.25em}\n  (b) &\\tf{Question}: What did Alyssa eat at the restaurant? \\\\\n  & \\tf{Answer}: catfish \\\\\n\\vspace{0.25em}\n  (c) &\\tf{Question}: How many friends does Alyssa have in this story? 
\\\\\n  & \\tf{Answer}: 3 \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{A sample story and comprehension questions from \\sys{MCTest}}{\\label{fig:mctest-example} A sample story and comprehension questions from the \\sys{MCTest} dataset  \\\\ \\cite{richardson2013mctest}.}\n\\end{figure}\n\nIs there a comprehensive evaluation that can test all these aspects and probe even deeper levels of understanding? We argue that the task of \\tf{reading comprehension} --- answering comprehension questions over a passage of text --- is an appropriate and important way to do so. Just as we use reading comprehension tests to measure how well a human has understood a piece of text, we believe that it can play the same role for evaluating how well computer systems understand human language.\n\nLet's take a closer look at some comprehension questions posed on the same passage (Figure~\\ref{fig:mctest-example}):\n\\begin{enumerate}[(a)]\n    \\item\n        To answer the first question \\ti{What city is Alyssa in?}, our machines need to pick out the sentence \\ti{She's now in Miami.}, resolve the \\ti{coreference} (recognizing that \\ti{She} refers to \\ti{Alyssa}), and then finally give the correct answer \\ti{Miami}.\n    \\item\n        For the second question \\ti{What did Alyssa eat at the restaurant?}, they need to first locate the two sentences \\ti{The restaurant had a special on catfish.} and \\ti{Alyssa enjoyed the restaurant's special.} and understand that the \\ti{special} that \\ti{Alyssa enjoyed} in the second sentence refers back to the first sentence. Based on the fact that \\ti{catfish} modifies \\ti{special}, the answer is hence \\ti{catfish}.\n    \\item\n        The last question is especially challenging. 
To arrive at the correct answer, the machines have to keep track of all the names of people mentioned in the text and their relationships, perform some arithmetic reasoning (counting), and finally give the answer \\ti{3}.\n\\end{enumerate}\n\nAs we can see, our computer systems have to understand many different aspects of text to answer these questions correctly. Since questions can be designed to query the aspects that we care about, \\ti{reading comprehension could be the most suitable task for evaluating language understanding}. This is a central theme of this thesis.\n\nIn this thesis, we study the problem of reading comprehension: how can we build computer systems to read a passage and answer these comprehension questions? In particular, we focus on \\tf{neural reading comprehension}, a class of reading comprehension models built using deep neural networks, which have proven much more effective than non-neural, feature-based models.\n\nThe field of reading comprehension has a long history --- as early as the 1970s, researchers already recognized that it is an important way to test the language understanding capabilities of computer programs~\\cite{lehnert1977process}. However, the field was neglected for decades, and only recently has it received a great deal of attention and seen rapid progress (see Figure~\\ref{fig:squad-progress} as an example), including the efforts that we will detail in this thesis. 
The recent success of reading comprehension can be attributed to two factors: 1) the creation of large-scale supervised datasets in the form of (passage, question, answer) triples; 2) the development of neural reading comprehension models.\n\nIn this thesis, we will cover the essence of modern neural reading comprehension: the formulation of the problem, the building blocks and key ingredients of these systems, and an understanding of where current neural reading comprehension systems can excel and where they still lag behind.\n\n\\begin{figure}[!t]\n    \\center\n    \\includegraphics[scale=0.5]{img/google_search.pdf}\n    \\longcaption{A search result on \\sys{Google}}{\\label{fig:google-search}A search result on \\sys{Google}. It not only returns a list of search documents but gives more precise answers within the documents.}\n\\end{figure}\n\nThe second central theme of this thesis is our deep belief that, if we can build high-performing reading comprehension systems, \\ti{they would be a crucial technology for applications such as question answering and dialogue systems}. Indeed, these language technologies are already very relevant to our daily lives. For example, today if we enter the search query ``How many people work at Stanford University?'' into \\sys{Google} (Figure~\\ref{fig:google-search}), \\sys{Google} not only returns a list of search documents, but also attempts to read these Web documents, highlight the most plausible answers, and display them at the top of the search results. We believe this is exactly where reading comprehension can help and thus facilitate more intelligent search engines. 
Additionally, with the development of digital personal assistants such as Amazon's \\sys{Alexa}, Apple's \\sys{Siri}, \\sys{Google Assistant} or Microsoft's \\sys{Cortana}, more and more users engage with these devices by having conversations and asking informational questions.\\footnote{A recent study \\href{https://www.stonetemple.com/digital-personal-assistants-study/}{https://www.stonetemple.com/digital-personal-assistants-study/} reported that asking general questions is indeed the number one use for such digital personal assistants.} We believe that building machines which are able to read and comprehend text will also greatly improve the capabilities of these personal assistants.\n\n\\begin{figure}[!h]\n\\small\n\\center\n\\begin{tabular}{p{0.85\\columnwidth}}\n\\midrule\nFort Lauderdale, Florida (CNN) -- Just taking a sip of water or walking to the bathroom is excruciatingly painful for 15-year-old Michael Brewer, who was burned over 65 percent of his body after being set on fire, allegedly by a group of teenagers. \\\\\n``It hurts my heart to see him in pain, but it enlightens at the same time to know my son is strong enough to make it through on a daily basis,'' his mother, Valerie Brewer, told CNN on Wednesday. \\\\\nBrewer and her husband, Michael Brewer, Sr., spoke to CNN's Tony Harris, a day after a 13-year-old boy who witnessed last month's attack publicly read a written statement: \\\\\n``I want to express my deepest sympathy to Mikey and his family,'' Jeremy Jarvis said.``I will pray for Mikey to grow stronger every day and for Mikey's speedy recovery.'' \\\\\nJarvis' older brother has been charged in the October 12 attack in Deerfield Beach, Florida. When asked about the teen's statement, Valerie Brewer -- who knows the Jarvis family -- said she ``can't focus on that.'' \\\\\n``I would really like to stay away from that because that brings negative energy to me and I don't need that right now,'' she said. 
\\\\\nHer son remains in guarded condition at the University of Miami's Jackson Memorial Hospital Burn Center. He suffered second- and third-degree burns over about two-thirds of his body, according to the hospital's associate director, Dr. Carl Schulman. \\\\\nThe teen faces a lifelong recovery from his injuries, Schulman told CNN's Harris.  \\\\\n\\vspace{0em}\n$Q_1$: What is the subject of the story? \\\\\n$A_1$: Michael Brewer \\\\\n\\vspace{0em}\n$Q_2$: What happened to him?\\\\\n$A_2$: He was burned \\\\\n\\vspace{0em}\n$Q_3$: How badly?\\\\\n$A_3$: Over 65\\% of his body \\\\\n\\vspace{0em}\n$Q_4$: Do we know who caused the burns?\\\\\n$A_4$: Yes \\\\\n\\bottomrule\n\\end{tabular}\n\\longcaption{A conversation from \\sys{CoQA} based on a CNN article}{\\label{fig:coqa-cnn-example} A conversation from \\sys{CoQA} based on a CNN article.}\n\\end{figure}\n\nTherefore, in this thesis, we are also interested in how we can build practical applications from the recent success of neural reading comprehension. We explore two research directions which employ neural reading comprehension as a key component:\n\\begin{description}\n    \\item \\tf{Open-domain question answering} combines the challenges from both information retrieval and reading comprehension and aims to answer general questions from either the Web or a large encyclopedia (e.g., Wikipedia).\n    \\item \\tf{Conversational question answering} combines the challenges from dialogue and reading comprehension, and tackles the problem of multi-turn question answering over a passage of text, much as users would engage with conversational agents. Figure~\\ref{fig:coqa-cnn-example} demonstrates an example from our \\sys{CoQA} dataset~\\cite{reddy2019coqa}. 
In this example, a person can ask a series of interconnected questions based on the content of a \\sys{CNN} article.\n\\end{description}\n\n\n\n\\section{Thesis Outline}\n\nFollowing the two central themes that we just discussed, this thesis consists of two parts --- \\sys{Part I Neural Reading Comprehension: Foundations} and \\sys{Part II Neural Reading Comprehension: Applications}.\n\n\\sys{Part I} focuses on the task of reading comprehension, with an emphasis on close reading of a short paragraph so that computer systems are able to answer comprehension questions.\n\\begin{description}\n    \\item In Chapter~\\ref{chapter:rc-overview}, we first give an overview of the history and recent development of the field of reading comprehension. Next, we formally define the problem and describe its main categories. We then briefly discuss the differences between reading comprehension and general question answering. Finally, we argue that the recent success of neural reading comprehension is driven by both large-scale datasets and neural models.\n    \\item In Chapter~\\ref{chapter:rc-models}, we present the family of neural reading comprehension models. We begin by describing non-neural, feature-based classifiers, and discuss how they differ from the end-to-end neural approaches. We then introduce a neural approach that we proposed, the \\sys{Stanford Attentive Reader}, and describe its basic building blocks and extensions. We present experimental results on two representative reading comprehension datasets: \\sys{CNN/Daily Mail} and \\sys{SQuAD}, and more importantly, we conduct an in-depth analysis of the neural models to better understand what these models have actually learned. Finally, we summarize recent advances in different aspects of neural reading comprehension models. 
This chapter is based on our work in \\cite{chen2016thorough} and \\cite{chen2017reading}.\n    \\item In Chapter~\\ref{chapter:rc-future}, we discuss future work and open questions in this field. We first examine the error cases of existing models, despite their high accuracies on the current benchmarks. We then discuss future directions, in terms of both the datasets and the models. Finally, we review several important research questions in this field, which remain open and are yet to be answered.\n\\end{description}\n\n\\sys{Part II} views reading comprehension as an important building block for practical applications such as question answering systems and conversational agents. Specifically,\n\\begin{description}\n    \\item In Chapter~\\ref{chapter:openqa}, we address the problem of open-domain question answering as an application of reading comprehension. We discuss how we can combine a high-performing neural reading comprehension system with effective information retrieval techniques to build a new generation of open-domain question answering systems. We describe a system we built named \\sys{DrQA}: its key components and how we create training data for it, and we then present a comprehensive evaluation on multiple question answering benchmarks. Finally, we discuss its current limitations and future work. This chapter is based on our work \\cite{chen2017reading}.\n    \\item In Chapter~\\ref{chapter:coqa}, we study the problem of conversational question answering, where a machine has to understand a text passage and answer a series of questions that appear in a conversation. We first briefly review the literature on dialogue and argue that conversational question answering is the key to building information-seeking dialogue agents. 
We introduce \\sys{CoQA}: a novel dataset for building \\tf{Co}nversational \\tf{Q}uestion \\tf{A}nswering systems, comprising 127k questions with answers, obtained from 8k conversations about text passages. We analyze the dataset in depth, build several competitive models on top of conversational models and neural reading comprehension models, and present the experimental results. Finally, we discuss future work in this area. This chapter is based on our work \\cite{reddy2019coqa}.\n\\end{description}\nWe conclude in Chapter~\\ref{chapter:conclusions}.\n\n\\section{Contributions}\nThe contributions of this thesis are summarized as follows:\n\\begin{itemize}\n    \\item\n        We were among the first to research neural reading comprehension. In particular, we proposed the \\sys{Stanford Attentive Reader} model, which has demonstrated superior performance on various modern reading comprehension tasks.\n    \\item\n        We made an effort to better understand what neural reading comprehension models have actually learned, and what depth of language understanding is needed to solve current tasks. We concluded that neural models are better at learning lexical matches and paraphrases than conventional feature-based classifiers, while the reasoning capabilities of existing systems are still rather limited.\n    \\item\n        We pioneered the research direction of employing neural reading comprehension as a core component of open-domain question answering, and examined how to generalize the model for this case. In particular, we implemented this idea in the \\sys{DrQA} system, a large-scale, factoid question answering system over English Wikipedia.\n    \\item\n        Finally, we set out to tackle the conversational question answering problem, in which computer systems need to answer comprehension questions in a dialogue context, so each question needs to be understood with its conversation history. 
To tackle this, we proposed the \\sys{CoQA} challenge and also built neural reading comprehension models adapted to this problem. We believe that this is a first but important step toward building conversational QA agents.\n\\end{itemize}\n"
  },
  {
    "path": "macros.tex",
    "content": "% (tweaks)\n\\definecolor{darkred}{rgb}{0.5451, 0.0, 0.0}\n\\definecolor{darkgreen}{rgb}{0.0, 0.3922, 0.0}\n\n\\def\\blue#1{\\textcolor{blue}{#1}}\n\\def\\darkblue#1{\\textcolor{blue}{#1}}\n\\def\\red#1{\\textcolor{red}{#1}}\n\\def\\darkred#1{\\textcolor{darkred}{#1}}\n\\def\\green#1{\\textcolor{green}{#1}}\n\\def\\darkgreen#1{\\textcolor{darkgreen}{#1}}\n\\def\\yellow#1{\\textcolor{yellow}{#1}}\n\\definecolor{burntorange}{HTML}{BF5700}\n\\def\\orange#1{\\textcolor{burntorange}{#1}}\n\\def\\gray#1{\\textcolor{gray}{#1}}\n\\def\\darkgray#1{\\textcolor{darkgray}{#1}}\n\n\\newcommand\\sys[1]{\\textsc{#1}}\n\\newcommand\\ti[1]{\\textit{#1}}\n\\newcommand\\tf[1]{\\textbf{#1}}\n\\newcommand\\mf[1]{\\mathbf{#1}}\n\\newcommand{\\indentitem}{\\setlength\\itemindent{25pt}}\n\\newcommand{\\nth}{$^{\\textrm{th}}$}\n\n\\newcommand\\denote[1]{\\ensuremath{\\llbracket\\ti{#1}\\rrbracket}}\n\n\\newcommand\\forward{\\ensuremath{\\sqsubseteq}}\n\\newcommand\\nforward{\\ensuremath{\\not\\sqsubseteq}}\n\\newcommand\\reverse{\\ensuremath{\\sqsupseteq}}\n\\newcommand\\alternate{\\ensuremath{\\downharpoonleft\\hspace{-1.25mm}\\upharpoonright}}\n\\newcommand\\cover{\\ensuremath{\\smallsmile}}\n\\newcommand\\equivalent{\\ensuremath{\\equiv}}\n\\newcommand\\negate{\\ensuremath{\\curlywedge}}\n\\newcommand\\independent{\\ensuremath{\\#}}\n\\newcommand\\tagUp[1]{#1\\ensuremath{^\\uparrow}}\n\\newcommand\\tagDown[1]{#1\\ensuremath{^\\downarrow}}\n\\newcommand\\join{\\ensuremath{\\bowtie}}\n\n\\newcommand\\h[1]{\\textbf{#1}}\n\\newcommand\\hh[1]{\\textbf{\\textcolor[rgb]{0.5,0,0}{#1}}}\n\\def\\ent#1{\\text{\\small{\\textsc{#1}}}}\n\\def\\typ#1{\\textit{#1}}\n%\\makeatletter\n%\\newcommand{\\xRightarrow}[2][]{\\ext@arrow 0359\\Rightarrowfill@{#1}{#2}}\n%\\makeatother\n\n\\def\\checkmark{\\tikz\\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- 
cycle;}\n\\newcommand{\\xmark}{\\textrm{\\ding{55}}}\n%\\newcommand{\\valid}{\\ensuremath{\\Rightarrow}}\n%\\newcommand{\\invalid}{\\ensuremath{\\Rightarrow\\lnot}}\n\\newcommand{\\valid}{\\ensuremath{\\Leftarrow}}\n\\newcommand{\\invalid}{\\reflectbox{\\ensuremath{\\Rightarrow\\lnot}}}\n\\newcommand{\\noop}{\\textcolor{white}{NOOP}}\n\\newcommand{\\noopTab}{\\begin{tabular}{c} \\textcolor{white}{NOOP} \\\\ \\textcolor{white}{NOOP} \\end{tabular}}\n\n\\newcommand\\true[1]{\\darkgreen{\\checkmark\\textit{#1}}}\n\\newcommand\\false[1]{\\darkred{\\xmark$~\\,$\\textit{#1}}}\n\\newcommand\\unknown[1]{?\\orange{\\textit{#1}}}\n\n\\newcommand{\\verticalcenter}[1]{\\begingroup\n\\setbox0=\\hbox{#1}%\n\\parbox{\\wd0}{\\box0}\\endgroup}\n\n\n%\n% KBP MACROS\n%\n% A variable, abstracted (e.g., x)\n\\def\\var#1{\\ensuremath{\\mathbf{{#1}}}}\n% A variable instance (e.g., Obama)\n\\def\\vari#1{\\ensuremath{#1}}\n\n\n% Aliases\n\\def\\reverb{ReVerb}\n\\def\\hydra{Stanford KBP}\n\\def\\knowbot{\\sys{Knowbot}}\n\n\n% KBP Specific\n% An entity\n\\def\\ent#1{\\text{\\small{\\textsc{#1}}}}\n\n% An extraction, e.g., \"Obama born_in Hawaii\"\n\\newcommand\\extr[3]{\\mbox{\\ent{#1}\\ $~$\\rel{#2}\\ $~$\\ent{#3}}}\n\\newcommand\\triple[3]{(\\mbox{\\ent{#1}; $~$\\rel{#2}; $~$\\ent{#3}})}\n% A clause in a logical form, e.g., \"born_in(Obama, Hawaii)\"\n\\newcommand\\clause[3]{\\mbox{\\rel{#2}\\ensuremath{(#1, #3)}}}\n\n\\newcommand\\subj[1]{\\textcolor{darkblue}{#1}}\n\\newcommand\\obj[1]{\\textcolor{burntorange}{#1}}\n\\newcommand\\rel[1]{\\textrm{#1}}\n\n\\def\\blue#1{\\textcolor{blue}{#1}}\n\\def\\red#1{\\textcolor{red}{#1}}\n\\def\\green#1{\\textcolor{green}{#1}}\n\\def\\yellow#1{\\textcolor{yellow}{#1}}\n\n\\newcommand\\posterline{\n  \\begin{center}\n    \\noindent\\rule{10cm}{0.4pt}\n  \\end{center}\n}\n\n\\newcommand\\entailmentExample[2]{\n\\vspace{0.5cm}\n\\noindent \\hspace{0.5cm}\\begin{tabular}{lp{0.80\\textwidth}}\n\\textbf{P}: & \\hspace*{-1mm}\\textit{#1} \\\\\n\\textbf{H}: 
& \\hspace*{-1mm}\\textit{#2}\n\\end{tabular}\n\\vspace{0.5cm}\n}\n\n\\tikzset{\n    invisible/.style={opacity=0},\n    visible on/.style={alt=#1{}{invisible}},\n    alt/.code args={<#1>#2#3}{%\n      \\alt<#1>{\\pgfkeysalso{#2}}{\\pgfkeysalso{#3}} % \\pgfkeysalso doesn't change the path\n    },\n  }\n\n\\newenvironment{lquote}{%\n  \\list{}{%\n    \\rightmargin0pt}%\n    \\item\\relax\n  }\n{\\endlist}\n\n\\newcommand\\circled[1]{\\tikz[baseline=(char.base)]{\n            \\node[shape=circle,draw=darkred,inner sep=2pt] (char) {#1};}\n}\n\\newcommand\\noncircled[1]{\\tikz[baseline=(char.base)]{\n            \\node[shape=circle,draw=white,inner sep=2pt] (char) {#1};}\n}\n\n\n\\newcommand{\\hnode}[1]{|(#1)| \\w{#1}}\n\\newcommand{\\rnode}[2]{|(#1#2)| \\w{\\textcolor{darkblue}{\\textbf{#1}}}\\textcolor{white}{#2}}\n\\newcommand{\\bnode}[2]{|(#1#2)| \\w{\\textcolor{darkblue}{\\textcolor{darkred}{\\textbf{#1}}}}\\textcolor{white}{#2}}\n\n\n\\newcommand\\longcaption[2]{\\caption[#1]{#2}}\n"
  },
  {
    "path": "preface.tex",
    "content": "%!TEX root = thesis.tex\n\n\\prefacesection{Abstract}\n\nTeaching machines to understand human language documents is one of the most elusive and long-standing challenges in Artificial Intelligence. This thesis tackles the problem of reading comprehension: how to build computer systems to read a passage of text and answer comprehension questions. On the one hand, we think that reading comprehension is an important task for evaluating how well computer systems understand human language. On the other hand, if we can build high-performing reading comprehension systems, they would be a crucial technology for applications such as question answering and dialogue systems.\n\nIn this thesis, we focus on neural reading comprehension: a class of reading comprehension models built on top of deep neural networks. Compared to traditional sparse, hand-designed feature-based models, these end-to-end neural models have proven to be more effective at learning rich linguistic phenomena and have improved performance on all the modern reading comprehension benchmarks by a large margin.\n\nThis thesis consists of two parts. In the first part, we aim to cover the essence of neural reading comprehension and present our efforts at building effective neural reading comprehension models and, more importantly, at understanding what neural reading comprehension models have actually learned, and what depth of language understanding is needed to solve current tasks. We also summarize recent advances and discuss future directions and open questions in this field.\n\nIn the second part of this thesis, we investigate how we can build practical applications based on the recent success of neural reading comprehension. 
In particular, we pioneered two new research directions: 1) how we can combine information retrieval techniques with neural reading comprehension to tackle large-scale open-domain question answering; and 2) how we can build conversational question answering systems from current single-turn, span-based reading comprehension models. We implemented these ideas in the \\sys{DrQA} and \\sys{CoQA} projects and demonstrated the effectiveness of these approaches. We believe that they hold great promise for future language technologies.\n"
  },
  {
    "path": "ref.bib",
    "content": "%%%%%%%%%%%%%%%%\n% Bibliography\n%%%%%%%%%%%%%%%%\n\n@string{iclr = \"International Conference on Learning Representations (ICLR)\"}\n@string{aaai = \"Conference on Artificial Intelligence (AAAI)\"}\n@string{emnlp = \"Empirical Methods in Natural Language Processing (EMNLP)\"}\n@string{acl = \"Association for Computational Linguistics (ACL)\"}\n@string{acl_demo = \"Association for Computational Linguistics (ACL): System Demonstrations\"}\n@string{aistats = \"Artificial Intelligence and Statistics (AISTATS)\"}\n@string{nips = \"Advances in Neural Information Processing Systems (NIPS)\"}\n@string{icml = \"International Conference on Machine Learning (ICML)\"}\n@string{naacl = \"North American Association for Computational Linguistics (NAACL)\"}\n@string{conll = \"Computational Natural Language Learning (CoNLL)\"}\n@string{ijcnlp = \"International Joint Conference on Natural Language Processing (IJCNLP)\"}\n@string{cvpr = \"Conference on computer vision and pattern recognition (CVPR)\"}\n@string{iccv = \"International Conference on Computer Vision (ICCV)\"}\n@string{acl_hlt = \"Association for Computational Linguistics: Human Language Technologies (ACL-HLT)\"}\n@string{jmlr = \"The Journal of Machine Learning Research (JMLR)\"}\n@string{tacl = \"Transactions of the Association of Computational Linguistics (TACL)\"}\n@string{lrec = \"International Conference on Language Resources and Evaluation (LREC)\"}\n@string{coling = \"International Conference on Computational Linguistics (COLING)\"}\n@string{cl = \"Computational Linguistics\"}\n\n@article{simmons1964indexing,\n  title={Indexing and dependency logic for answering {English} questions},\n  author={Simmons, Robert F and Klein, Sheldon and McConlogue, Keren},\n  journal={American Documentation},\n  volume={15},\n  number={3},\n  pages={196--204},\n  year={1964}\n}\n\n@phdthesis{charniak1972toward,\n  title={Toward a model of children's story comprehension},\n  author={Charniak, Eugene},\n  
year={1972},\n  school={Massachusetts Institute of Technology}\n}\n\n@book{schank1977scripts,\n  title={Scripts, plans, goals and understanding: An inquiry into human knowledge structures},\n  author={Schank, Roger C and Abelson, Robert P},\n  year={1977},\n  publisher={Lawrence Erlbaum}\n}\n\n@phdthesis{lehnert1977process,\n  title={The process of question answering},\n  author={Lehnert, Wendy Grace},\n  year={1977},\n  school={Yale University}\n}\n\n@article{hochreiter1997,\n  title={Long short-term memory},\n  author={Hochreiter, Sepp and Schmidhuber, J{\\\"u}rgen},\n  journal={Neural Computation},\n  volume={9},\n  pages={1735--1780},\n  year={1997}\n}\n\n@inproceedings{kupiec1993murax,\n  title={{MURAX}: A robust linguistic approach for question answering using an on-line encyclopedia},\n  author={Kupiec, Julian},\n  booktitle={ACM SIGIR conference on Research and development in information retrieval},\n  pages={181--190},\n  year={1993}\n}\n\n@book{kintsch1998comprehension,\n  title={Comprehension: A paradigm for cognition.},\n  author={Kintsch, Walter},\n  year={1998},\n  publisher={Cambridge University Press}\n}\n\n@inproceedings{voorhees1999trec,\n  title={The {TREC-8} Question Answering Track Report},\n  author={Voorhees, Ellen M},\n  booktitle={Text {RE}trieval Conference (TREC)},\n  pages={77--82},\n  year={1999}\n}\n\n@inproceedings{hirschman1999deep,\n  title={Deep read: A reading comprehension system},\n  author={Hirschman, Lynette and Light, Marc and Breck, Eric and Burger, John D},\n  booktitle=acl,\n  pages={325--332},\n  year={1999}\n}\n\n@inproceedings{riloff2000rule,\n  title={A rule-based question answering system for reading comprehension tests},\n  author={Riloff, Ellen and Thelen, Michael},\n  booktitle={ANLP/NAACL Workshop on Reading comprehension tests as evaluation for computer-based language understanding systems},\n  pages={13--19},\n  year={2000}\n}\n\n@inproceedings{charniak2000reading,\n  title={Reading comprehension programs in a 
statistical-language-processing class},\n  author={Charniak, Eugene and Altun, Yasemin and Braz, Rodrigo de Salvo and Garrett, Benjamin and Kosmala, Margaret and Moscovich, Tomer and Pang, Lixin and Pyo, Changhee and Sun, Ye and Wy, Wei and others},\n  booktitle={ANLP/NAACL Workshop on Reading comprehension tests as evaluation for computer-based language understanding systems},\n  pages={1--5},\n  year={2000}\n}\n\n@inproceedings{moldovan2000structure,\n  title={The structure and performance of an open-domain question answering system},\n  author={Moldovan, Dan and Harabagiu, Sanda and Pasca, Marius and Mihalcea, Rada and Girju, Roxana and Goodrum, Richard and Rus, Vasile},\n  booktitle=acl,\n  pages={563--570},\n  year={2000}\n}\n\n@inproceedings{brill2002askmsr,\n  title={An analysis of the {AskMSR} question-answering system},\n  author={Brill, Eric and Dumais, Susan and Banko, Michele},\n  booktitle=emnlp,\n  pages={257--264},\n  year={2002}\n}\n\n@inproceedings{papineni2002bleu,\n  title={{BLEU}: a method for automatic evaluation of machine translation},\n  author={Papineni, Kishore and Roukos, Salim and Ward, Todd and Zhu, Wei-Jing},\n  booktitle=acl,\n  pages={311--318},\n  year={2002}\n}\n\n@inproceedings{Ahn2004using,\n  author = {Ahn, David and Jijkoun, Valentin and Mishne, Gilad and Müller, Karin and de Rijke, Maarten and Schlobach, Stefan},\n  booktitle = {Text {RE}trieval Conference (TREC)},\n  title = {Using {Wikipedia} at the {TREC} {QA} {Track}},\n  year = {2004}\n}\n\n@article{lin2004rouge,\n  title={{ROUGE}: A package for automatic evaluation of summaries},\n  author={Lin, Chin-Yew},\n  journal={Text Summarization Branches Out},\n  year={2004}\n}\n\n@inproceedings{banerjee2005meteor,\n  title={{METEOR}: An automatic metric for MT evaluation with improved correlation with human judgments},\n  author={Banerjee, Satanjeev and Lavie, Alon},\n  booktitle={ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or 
summarization},\n  pages={65--72},\n  year={2005}\n}\n\n@inproceedings{buscaldi2006mining,\n  title={Mining knowledge from {Wikipedia} for the question answering task},\n  author={Buscaldi, Davide and Rosso, Paolo},\n  booktitle=lrec,\n  pages={727--730},\n  year={2006}\n}\n\n@incollection{auer2007dbpedia,\n  title={{DBpedia}: A nucleus for a web of open data},\n  author={Auer, S{\\\"o}ren and Bizer, Christian and Kobilarov, Georgi and Lehmann, Jens and Cyganiak, Richard and Ives, Zachary},\n  booktitle={The Semantic Web},\n  pages={722--735},\n  year={2007},\n  publisher={Springer}\n}\n\n@inproceedings{bollacker2008freebase,\n  title={Freebase: a collaboratively created graph database for structuring human knowledge},\n  author={Bollacker, Kurt and Evans, Colin and Paritosh, Praveen and Sturge, Tim and Taylor, Jamie},\n  booktitle={Proceedings of the 2008 ACM SIGMOD international conference on Management of data},\n  pages={1247--1250},\n  year={2008}\n}\n\n@inproceedings{mitchell2009populating,\n  title={Populating the semantic web by macro-reading internet text},\n  author={Mitchell, Tom M and Betteridge, Justin and Carlson, Andrew and Hruschka, Estevam and Wang, Richard},\n  booktitle={International Semantic Web Conference (IWSC)},\n  pages={998--1002},\n  year={2009}\n}\n\n@inproceedings{mintz2009distant,\n  author    = {Mintz, Mike  and  Bills, Steven  and  Snow, Rion  and  Jurafsky, Daniel},\n  title     = {Distant supervision for relation extraction without labeled data},\n  booktitle = acl,\n  year      = {2009},\n  pages     = {1003--1011}\n}\n\n@inproceedings{weinberger2009feature,\n  title={Feature hashing for large scale multitask learning},\n  author={Weinberger, Kilian and Dasgupta, Anirban and Langford, John and Smola, Alex and Attenberg, Josh},\n  booktitle=icml,\n  pages={1113--1120},\n  year={2009}\n}\n\n@article{wu2010adapting,\n  title={Adapting boosting for information retrieval measures},\n  author={Wu, Qiang and Burges, Christopher JC and 
Svore, Krysta M and Gao, Jianfeng},\n  journal={Information Retrieval},\n  volume={13},\n  number={3},\n  pages={254--270},\n  year={2010},\n  publisher={Springer}\n}\n\n@article{ferrucci2010building,\n  title={Building {Watson}: An overview of the {DeepQA} project},\n  author={Ferrucci, David and Brown, Eric and Chu-Carroll, Jennifer and Fan, James and Gondek, David and Kalyanpur, Aditya A and Lally, Adam and Murdock, J William and Nyberg, Eric and Prager, John and others},\n  journal={AI magazine},\n  volume={31},\n  number={3},\n  pages={59--79},\n  year={2010}\n}\n\n@inproceedings{krizhevsky2012imagenet,\n  title={Imagenet classification with deep convolutional neural networks},\n  author={Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E},\n  booktitle=nips,\n  pages={1097--1105},\n  year={2012}\n}\n\n@inproceedings{graves2013speech,\n  title={Speech recognition with deep recurrent neural networks},\n  author={Graves, Alex and Mohamed, Abdel-rahman and Hinton, Geoffrey},\n  booktitle={International Conference on Acoustics, Speech and Signal processing (ICASSP)},\n  pages={6645--6649},\n  year={2013}\n}\n\n@inproceedings{richardson2013mctest,\n  author    = {Richardson, Matthew  and  Burges, Christopher J.C.  
and  Renshaw, Erin},\n  title     = {{MCTest}: A Challenge Dataset for the Open-Domain Machine Comprehension of Text},\n  booktitle = emnlp,\n  pages     = {193--203},\n  year      = {2013}\n}\n\n@inproceedings{berant2013semantic,\n  title={Semantic Parsing on {Freebase} from Question-Answer Pairs},\n  author={Berant, Jonathan and Chou, Andrew and Frostig, Roy and Liang, Percy},\n  booktitle=emnlp,\n  pages={1533--1544},\n  year={2013}\n}\n\n@inproceedings{mikolov2013distributed,\n  title={Distributed representations of words and phrases and their compositionality},\n  author={Mikolov, Tomas and Sutskever, Ilya and Chen, Kai and Corrado, Greg S and Dean, Jeff},\n  booktitle=nips,\n  pages={3111--3119},\n  year={2013}\n}\n\n@article{kingma2014adam,\n  title={Adam: A method for stochastic optimization},\n  author={Kingma, Diederik and Ba, Jimmy},\n  journal={arXiv preprint arXiv:1412.6980},\n  year={2014}\n}\n\n\n@inproceedings{manning2014stanford,\n  title={The {Stanford} {CoreNLP} natural language processing toolkit},\n  author={Manning, Christopher D and Surdeanu, Mihai and Bauer, John and Finkel, Jenny and Bethard, Steven J and McClosky, David},\n  booktitle=acl_demo,\n  pages={55--60},\n  year={2014}\n}\n\n@inproceedings{fader2014open,\n  title={Open Question Answering Over Curated and Extracted Knowledge Bases},\n  author={Fader, Anthony and Zettlemoyer, Luke and Etzioni, Oren},\n  booktitle={SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)},\n  year={2014}\n}\n\n@inproceedings{yao2014freebase,\n  title={Freebase {QA}: Information Extraction or Semantic Parsing?},\n  author={Yao, Xuchen and Berant, Jonathan and Van Durme, Benjamin},\n  booktitle={ACL 2014 Workshop on Semantic Parsing},\n  pages={82--86},\n  year={2014}\n}\n\n@inproceedings{berant2014modeling,\n  author    = {Berant, Jonathan  and  Srikumar, Vivek  and  Chen, Pei-Chun  and  Vander Linden, Abby  and  Harding, Brittany  and  Huang, Brad  and  Clark, Peter  and  Manning, Christopher 
D.},\n  title     = {Modeling Biological Processes for Reading Comprehension},\n  booktitle = emnlp,\n  year      = {2014},\n  pages     = {1499--1510}\n}\n\n@inproceedings{cho2014learning,\n  title={Learning Phrase Representations using {RNN} Encoder-Decoder for Statistical Machine Translation},\n  author={Cho, Kyunghyun and Merrienboer, Bart and Gulcehre, Caglar and Bougares, Fethi and Schwenk, Holger and Bengio, Yoshua},\n  booktitle=emnlp,\n  year={2014},\n  pages = {1724--1734}\n}\n\n@inproceedings{pennington2014glove,\n  title={Glove: Global vectors for word representation},\n  author={Pennington, Jeffrey and Socher, Richard and Manning, Christopher},\n  booktitle=emnlp,\n  pages={1532--1543},\n  year={2014}\n}\n\n@inproceedings{kim2014convolutional,\n  title={Convolutional Neural Networks for Sentence Classification},\n  author={Kim, Yoon},\n  booktitle=emnlp,\n  pages={1746--1751},\n  year={2014}\n}\n\n@article{ryu2014open,\n  title={Open domain question answering using {Wikipedia-based} knowledge model},\n  author={Ryu, Pum-Mo and Jang, Myung-Gil and Kim, Hyun-Ki},\n  journal={Information Processing \\& Management},\n  volume={50},\n  pages={683--692},\n  year={2014},\n  publisher={Elsevier}\n}\n\n@inproceedings{sutskever2014sequence,\n  title={Sequence to sequence learning with neural networks},\n  author={Sutskever, Ilya and Vinyals, Oriol and Le, Quoc V},\n  booktitle=nips,\n  pages={3104--3112},\n  year={2014}\n}\n\n@inproceedings{chelba2014one,\n  title={One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling},\n  author={Chelba, Ciprian and Mikolov, Tomas and Schuster, Mike and Ge, Qi and Brants, Thorsten and Koehn, Phillipp and Robinson, Tony},\n  booktitle={Conference of the International Speech Communication Association (Interspeech)},\n  year={2014}\n}\n\n@inproceedings{antol2015vqa,\n  title={{VQA}: Visual {Q}uestion {A}nswering},\n  author={Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret 
and Batra, Dhruv and Lawrence Zitnick, C and Parikh, Devi},\n  booktitle=iccv,\n  pages={2425--2433},\n  year={2015}\n}\n\n@article{vinyals2015neural,\n\ttitle = {A Neural Conversational Model},\n\tjournal = {arXiv preprint arXiv:1506.05869},\n\tauthor = {Vinyals, Oriol and Le, Quoc},\n\tyear = {2015}\n}\n\n@inproceedings{pasupat2015compositional,\n  title={Compositional Semantic Parsing on Semi-Structured Tables},\n  author={Pasupat, Panupong and Liang, Percy},\n  booktitle=acl,\n  pages={1470--1480},\n  year={2015}\n}\n\n@inproceedings{baudivs2015modeling,\n  title={Modeling of the question answering task in the {YodaQA} system},\n  author={Baudi{\\v{s}}, Petr and {\\v{S}}ediv{\\`y}, Jan},\n  booktitle={International Conference of the Cross-Language Evaluation Forum for European Languages},\n  pages={222--228},\n  year={2015},\n  organization={Springer}\n}\n\n@inproceedings{baudivs2015yodaqa,\n  title={{YodaQA}: a modular question answering system pipeline},\n  author={Baudi{\\v{s}}, Petr},\n  booktitle={POSTER 2015---19th International Student Conference on Electrical Engineering},\n  pages={1156--1165},\n  year={2015}\n}\n\n@book{gormley2015elasticsearch,\n  title={Elasticsearch: The Definitive Guide},\n  author={Gormley, Clinton and Tong, Zachary},\n  year={2015},\n  publisher={O'Reilly Media, Inc}\n}\n\n@article{bordes2015large,\n  title={Large-scale Simple Question Answering with Memory Networks},\n  author={Bordes, Antoine and Usunier, Nicolas and Chopra, Sumit and Weston, Jason},\n  journal={arXiv preprint arXiv:1506.02075},\n  year={2015}\n}\n\n@inproceedings{weston2015memory,\n  author = {Weston, Jason and Chopra, Sumit and Bordes, Antoine},\n  title = {Memory Networks},\n  booktitle = iclr,\n  year = {2015}\n}\n\n@inproceedings{bahdanau2015neural,\n  author    = {Dzmitry Bahdanau and Kyunghyun Cho and Yoshua Bengio},\n  title     = {Neural Machine Translation by Jointly Learning to Align and Translate},\n  booktitle = iclr,\n  
year={2015}\n}\n\n@inproceedings{hermann2015teaching,\n  author = {Karl Moritz Hermann and Tom\\'a\\v{s} Ko\\v{c}isk\\'y and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom},\n  title = {Teaching Machines to Read and Comprehend},\n  booktitle = nips,\n  pages={1693--1701},\n  year = {2015},\n}\n\n@inproceedings{srivastava2015training,\n  title={Training very deep networks},\n  author={Srivastava, Rupesh K and Greff, Klaus and Schmidhuber, J{\\\"u}rgen},\n  booktitle=nips,\n  pages={2377--2385},\n  year={2015}\n}\n\n@inproceedings{narasimhan2015machine,\n  title={Machine comprehension with discourse relations},\n  author={Narasimhan, Karthik and Barzilay, Regina},\n  booktitle=acl,\n  volume={1},\n  pages={1253--1262},\n  year={2015}\n}\n\n@inproceedings{sachan2015learning,\n  title={Learning answer-entailing structures for machine comprehension},\n  author={Sachan, Mrinmaya and Dubey, Kumar and Xing, Eric and Richardson, Matthew},\n  booktitle=acl,\n  volume={1},\n  pages={239--249},\n  year={2015}\n}\n\n@inproceedings{wang2015machine,\n  title={Machine comprehension with syntax, frames, and semantics},\n  author={Wang, Hai and Bansal, Mohit and Gimpel, Kevin and McAllester, David},\n  booktitle=acl,\n  volume={2},\n  pages={700--706},\n  year={2015}\n}\n\n@inproceedings{luong2015effective,\n  title={Effective Approaches to Attention-based Neural Machine Translation},\n  author={Luong, Thang and Pham, Hieu and Manning, Christopher D},\n  booktitle=emnlp,\n  pages={1412--1421},\n  year={2015}\n}\n\n@inproceedings{sun2015open,\n  title={Open domain question answering via semantic enrichment},\n  author={Sun, Huan and Ma, Hao and Yih, Wen-tau and Tsai, Chen-Tse and Liu, Jingjing and Chang, Ming-Wei},\n  booktitle={International Conference on World Wide Web (WWW)},\n  pages={1045--1055},\n  year={2015}\n}\n\n@article{cho2015natural,\n  title={Natural language understanding with distributed representation},\n  author={Cho, 
Kyunghyun},\n  journal={arXiv preprint arXiv:1511.07916},\n  year={2015}\n}\n\n@inproceedings{he2016deep,\n  title={Deep residual learning for image recognition},\n  author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},\n  booktitle=cvpr,\n  pages={770--778},\n  year={2016}\n}\n\n@inproceedings{tapaswi2016movieqa,\n  title={{MovieQA}: Understanding stories in movies through question-answering},\n  author={Tapaswi, Makarand and Zhu, Yukun and Stiefelhagen, Rainer and Torralba, Antonio and Urtasun, Raquel and Fidler, Sanja},\n  booktitle=cvpr,\n  pages={4631--4640},\n  year={2016}\n}\n\n@inproceedings{ranzato2016sequence,\n  title={Sequence level training with recurrent neural networks},\n  author={Ranzato, Marc'Aurelio and Chopra, Sumit and Auli, Michael and Zaremba, Wojciech},\n  booktitle=iclr,\n  year={2016}\n}\n\n@article{nguyen2016ms,\n  title={{MS MARCO}: A human generated machine reading comprehension dataset},\n  author={Nguyen, Tri and Rosenberg, Mir and Song, Xia and Gao, Jianfeng and Tiwary, Saurabh and Majumder, Rangan and Deng, Li},\n  journal={arXiv preprint arXiv:1611.09268},\n  year={2016}\n}\n\n@article{lee2016learning,\n  title={Learning recurrent span representations for extractive question answering},\n  author={Lee, Kenton and Salant, Shimi and Kwiatkowski, Tom and Parikh, Ankur and Das, Dipanjan and Berant, Jonathan},\n  journal={arXiv preprint arXiv:1611.01436},\n  year={2016}\n}\n\n\n@inproceedings{li2016diversity,\n  title={A Diversity-Promoting Objective Function for Neural Conversation Models},\n  author={Li, Jiwei and Galley, Michel and Brockett, Chris and Gao, Jianfeng and Dolan, Bill},\n  booktitle=naacl,\n  pages={110--119},\n  year={2016}\n}\n\n@article{bajgar2016embracing,\n  title={Embracing data abundance: {BookTest} dataset for reading comprehension},\n  author={Bajgar, Ondrej and Kadlec, Rudolf and Kleindienst, Jan},\n  journal={arXiv preprint arXiv:1610.00956},\n  
year={2016}\n}\n\n@inproceedings{chen2016thorough,\n    title={A Thorough Examination of the {CNN/Daily Mail} Reading Comprehension Task},\n    author={Chen, Danqi and Bolton, Jason and Manning, Christopher D},\n    booktitle=acl,\n    volume={1},\n    year={2016},\n    pages = {2358--2367},\n}\n\n@inproceedings{shen2016minimum,\n  title={Minimum Risk Training for Neural Machine Translation},\n  author={Shen, Shiqi and Cheng, Yong and He, Zhongjun and He, Wei and Wu, Hua and Sun, Maosong and Liu, Yang},\n  booktitle=acl,\n  volume={1},\n  pages={1683--1692},\n  year={2016}\n}\n\n@inproceedings{gu2016incorporating,\n  author    = {Gu, Jiatao  and  Lu, Zhengdong  and  Li, Hang  and  Li, Victor O.K.},\n  title     = {Incorporating Copying Mechanism in Sequence-to-Sequence Learning},\n  booktitle = acl,\n  year      = {2016},\n  pages     = {1631--1640}\n}\n\n@inproceedings{lei2016rationalizing,\n  title={Rationalizing Neural Predictions},\n  author={Lei, Tao and Barzilay, Regina and Jaakkola, Tommi},\n  booktitle=emnlp,\n  pages={107--117},\n  year={2016}\n}\n\n@inproceedings{rajpurkar2016squad,\n  author = {Rajpurkar, Pranav  and  Zhang, Jian  and  Lopyrev, Konstantin  and  Liang, Percy},\n  booktitle = emnlp,\n  title = {{SQuAD}: 100,000+ Questions for Machine Comprehension of Text},\n  year = {2016},\n  pages = {2383--2392}\n}\n\n@inproceedings{andreas2016learning,\n  title={Learning to Compose Neural Networks for Question Answering},\n  author={Andreas, Jacob and Rohrbach, Marcus and Darrell, Trevor and Klein, Dan},\n  booktitle=naacl,\n  pages={1545--1554},\n  year={2016}\n}\n\n@inproceedings{parikh2016decomposable,\n  title={A Decomposable Attention Model for Natural Language Inference},\n  author={Parikh, Ankur and T{\\\"a}ckstr{\\\"o}m, Oscar and Das, Dipanjan and Uszkoreit, Jakob},\n  booktitle=emnlp,\n  pages={2249--2255},\n  year={2016}\n}\n\n@inproceedings{onishi2016did,\n  title={Who did What: A Large-Scale Person-Centered Cloze Dataset},\n  
author={Onishi, Takeshi and Wang, Hai and Bansal, Mohit and Gimpel, Kevin and McAllester, David},\n  booktitle=emnlp,\n  pages={2230--2235},\n  year={2016}\n}\n\n@inproceedings{miller2016key,\n  title={Key-Value Memory Networks for Directly Reading Documents},\n  author={Miller, Alexander and Fisch, Adam and Dodge, Jesse and Karimi, Amir-Hossein and Bordes, Antoine and Weston, Jason},\n  booktitle=emnlp,\n  pages={1400--1409},\n  year={2016}\n}\n\n@inproceedings{liu2016not,\n  title={How {NOT} To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation},\n  author={Liu, Chia-Wei and Lowe, Ryan and Serban, Iulian and Noseworthy, Mike and Charlin, Laurent and Pineau, Joelle},\n  booktitle=emnlp,\n  pages={2122--2132},\n  year={2016}\n}\n\n@inproceedings{hill2016goldilocks,\n  title={The {Goldilocks} {Principle}: Reading Children's Books with Explicit Memory Representations},\n  author={Hill, Felix and Bordes, Antoine and Chopra, Sumit and Weston, Jason},\n  booktitle=iclr,\n  year={2016}\n}\n\n@inproceedings{hewlett2016wiki,\n  author    = {Hewlett, Daniel  and  Lacoste, Alexandre  and  Jones, Llion  and  Polosukhin, Illia  and  Fandrianto, Andrew  and  Han, Jay  and  Kelcey, Matthew  and  Berthelot, David},\n  title     = {WikiReading: A Novel Large-scale Language Understanding Task over Wikipedia},\n  booktitle = acl,\n  pages     = {1535--1545},\n  year      = {2016}\n}\n\n@inproceedings{gal2016theoretically,\n  title={A theoretically grounded application of dropout in recurrent neural networks},\n  author={Gal, Yarin and Ghahramani, Zoubin},\n  booktitle=nips,\n  pages={1019--1027},\n  year={2016}\n}\n\n@book{goldberg2017neural,\n  title={Neural network methods for natural language processing},\n  author={Goldberg, Yoav},\n  journal={Synthesis Lectures on Human Language Technologies},\n  volume={10},\n  number={1},\n  pages={1--309},\n  year={2017},\n  publisher={Morgan \\& Claypool 
Publishers}\n}\n\n@inproceedings{klein2017opennmt,\n  title={{OpenNMT}: Open-Source Toolkit for Neural Machine Translation},\n  author={Klein, Guillaume and Kim, Yoon and Deng, Yuntian and Senellart, Jean and Rush, Alexander},\n  booktitle=acl_demo,\n  pages={67--72},\n  year={2017}\n}\n\n@inproceedings{das2017visual,\n  title={Visual Dialog},\n  author={Das, Abhishek and Kottur, Satwik and Gupta, Khushi and Singh, Avi and Yadav, Deshraj and Moura, Jose MF and Parikh, Devi and Batra, Dhruv},\n  booktitle=cvpr,\n  pages={1080--1089},\n  year={2017}\n}\n\n@article{mikolov2017advances,\n  title={Advances in pre-training distributed word representations},\n  author={Mikolov, Tomas and Grave, Edouard and Bojanowski, Piotr and Puhrsch, Christian and Joulin, Armand},\n  journal={arXiv preprint arXiv:1712.09405},\n  year={2017}\n}\n\n@inproceedings{wang2017gated,\n  title={Gated self-matching networks for reading comprehension and question answering},\n  author={Wang, Wenhui and Yang, Nan and Wei, Furu and Chang, Baobao and Zhou, Ming},\n  booktitle=acl,\n  volume={1},\n  pages={189--198},\n  year={2017}\n}\n\n@inproceedings{yu2017learning,\n  title={Learning to Skim Text},\n  author={Yu, Adams Wei and Lee, Hongrae and Le, Quoc},\n  booktitle=acl,\n  volume={1},\n  pages={1880--1890},\n  year={2017}\n}\n\n@inproceedings{weissenborn2017making,\n  title={Making Neural QA as Simple as Possible but not Simpler},\n  author={Weissenborn, Dirk and Wiese, Georg and Seiffe, Laura},\n  booktitle=conll,\n  pages={271--280},\n  year={2017}\n}\n\n@inproceedings{seo2017bidirectional,\n  title={Bidirectional attention flow for machine comprehension},\n  author={Seo, Minjoon and Kembhavi, Aniruddha and Farhadi, Ali and Hajishirzi, Hannaneh},\n  booktitle=iclr,\n  year={2017}\n}\n\n@inproceedings{xiong2017dynamic,\n  title={Dynamic coattention networks for question answering},\n  author={Xiong, Caiming and Zhong, Victor and Socher, Richard},\n  booktitle=iclr,\n  
year={2017}\n}\n\n@inproceedings{wang2017machine,\n  title={Machine Comprehension using {Match-LSTM} and Answer Pointer},\n  author={Wang, Shuohang and Jiang, Jing},\n  booktitle=iclr,\n  year={2017}\n}\n\n@inproceedings{chen2017reading,\n    title={Reading {Wikipedia} to Answer Open-Domain Questions},\n    author={Chen, Danqi and Fisch, Adam and Weston, Jason and Bordes, Antoine},\n    booktitle=acl,\n    volume={1},\n    year={2017},\n    pages={1870--1879}\n}\n\n@inproceedings{sugawara2017evaluation,\n  title={Evaluation metrics for machine reading comprehension: Prerequisite skills and readability},\n  author={Sugawara, Saku and Kido, Yusuke and Yokono, Hikaru and Aizawa, Akiko},\n  booktitle=acl,\n  volume={1},\n  pages={806--817},\n  year={2017}\n}\n\n@inproceedings{kembhavi2017you,\n  title={Are You Smarter Than a Sixth Grader? {Textbook} Question Answering for Multimodal Machine Comprehension},\n  author={Kembhavi, Aniruddha and Seo, Minjoon and Schwenk, Dustin and Choi, Jonghyun and Farhadi, Ali and Hajishirzi, Hannaneh},\n  booktitle=cvpr,\n  pages={5376--5384},\n  year={2017}\n}\n\n@inproceedings{see2017get,\n  title={Get to the point: Summarization with pointer-generator networks},\n  author={See, Abigail and Liu, Peter J and Manning, Christopher D},\n  booktitle=acl,\n  volume={1},\n  year={2017},\n  pages={1073--1083}\n}\n\n@inproceedings{joshi2017triviaqa,\n  title={{TriviaQA}: A large scale distantly supervised challenge dataset for reading comprehension},\n  author={Joshi, Mandar and Choi, Eunsol and Weld, Daniel S and Zettlemoyer, Luke},\n  booktitle=acl,\n  volume={1},\n  year={2017},\n  pages={1601--1611}\n}\n\n@inproceedings{iyyer2017search,\n  title={Search-based neural structured learning for sequential question answering},\n  author={Iyyer, Mohit and Yih, Wen-tau and Chang, Ming-Wei},\n  booktitle=acl,\n  volume={1},\n  pages={1821--1831},\n  year={2017}\n}\n\n@inproceedings{xie2017constituent,\n  title={A constituent-centric neural 
architecture for reading comprehension},\n  author={Xie, Pengtao and Xing, Eric},\n  booktitle=acl,\n  volume={1},\n  pages={1405--1414},\n  year={2017}\n}\n\n@article{dhingra2017comparative,\n  title={A comparative study of word embeddings for reading comprehension},\n  author={Dhingra, Bhuwan and Liu, Hanxiao and Salakhutdinov, Ruslan and Cohen, William W},\n  journal={arXiv preprint arXiv:1703.00993},\n  year={2017}\n}\n\n@article{dhingra2017quasar,\n  title={Quasar: Datasets for Question Answering by Search and Reading},\n  author={Dhingra, Bhuwan and Mazaitis, Kathryn and Cohen, William W},\n  journal={arXiv preprint arXiv:1707.03904},\n  year={2017}\n}\n\n@inproceedings{miller2017parlai,\n  title={{ParlAI}: A Dialog Research Software Platform},\n  author={Miller, Alexander and Feng, Will and Batra, Dhruv and Bordes, Antoine and Fisch, Adam and Lu, Jiasen and Parikh, Devi and Weston, Jason},\n  booktitle=emnlp,\n  pages={79--84},\n  year={2017}\n}\n\n@inproceedings{lai2017race,\n  title={{RACE}: Large-scale ReAding Comprehension Dataset From Examinations},\n  author={Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard},\n  booktitle=emnlp,\n  pages={785--794},\n  year={2017}\n}\n\n@inproceedings{welbl2017crowdsourcing,\n  title={Crowdsourcing Multiple Choice Science Questions},\n  author={Welbl, Johannes and Liu, Nelson F and Gardner, Matt},\n  booktitle={3rd Workshop on Noisy User-generated Text},\n  pages={94--106},\n  year={2017}\n}\n\n@inproceedings{jia2017adversarial,\n  title={Adversarial Examples for Evaluating Reading Comprehension Systems},\n  author={Jia, Robin and Liang, Percy},\n  booktitle=emnlp,\n  pages={2021--2031},\n  year={2017}\n}\n\n@article{dunn2017searchqa,\n  title={{SearchQA}: A new {Q\\&A} dataset augmented with context from a search engine},\n  author={Dunn, Matthew and Sagun, Levent and Higgins, Mike and Guney, V Ugur and Cirik, Volkan and Cho, Kyunghyun},\n  journal={arXiv preprint arXiv:1704.05179},\n  
year={2017}\n}\n\n@inproceedings{vaswani2017attention,\n  title={Attention is all you need},\n  author={Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, {\\L}ukasz and Polosukhin, Illia},\n  booktitle=nips,\n  pages={5998--6008},\n  year={2017}\n}\n\n@inproceedings{coleman2017dawnbench,\n  title={{DAWNBench}: An End-to-End Deep Learning Benchmark and Competition},\n  author={Coleman, Cody and Narayanan, Deepak and Kang, Daniel and Zhao, Tian and Zhang, Jian and Nardi, Luigi and Bailis, Peter and Olukotun, Kunle and R{\\'e}, Chris and Zaharia, Matei},\n  booktitle={NIPS ML Systems Workshop},\n  year={2017}\n}\n\n\n@inproceedings{mccann2017learned,\n  title={Learned in translation: Contextualized word vectors},\n  author={McCann, Bryan and Bradbury, James and Xiong, Caiming and Socher, Richard},\n  booktitle=nips,\n  pages={6297--6308},\n  year={2017}\n}\n\n@article{bojanowski2017enriching,\n  title={Enriching Word Vectors with Subword Information},\n  author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},\n  journal=tacl,\n  volume={5},\n  pages={135--146},\n  year={2017}\n}\n\n@inproceedings{wang2018r,\n  title={R\\^{}3: Reinforced Reader-Ranker for Open-Domain Question Answering},\n  author={Wang, Shuohang and Yu, Mo and Guo, Xiaoxiao and Wang, Zhiguo and Klinger, Tim and Zhang, Wei and Chang, Shiyu and Tesauro, Gerald and Zhou, Bowen and Jiang, Jing},\n  booktitle=aaai,\n  year={2018}\n}\n\n@inproceedings{wang2018evidence,\n  title={Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering},\n  author={Wang, Shuohang and Yu, Mo and Jiang, Jing and Zhang, Wei and Guo, Xiaoxiao and Chang, Shiyu and Wang, Zhiguo and Klinger, Tim and Tesauro, Gerald and Campbell, Murray},\n  booktitle=iclr,\n  year={2018}\n}\n\n@inproceedings{talmor2018web,\n  title={The Web as a Knowledge-Base for Answering Complex Questions},\n  author={Talmor, Alon and Berant, 
Jonathan},\n  booktitle=naacl,\n  volume={1},\n  pages={641--651},\n  year={2018}\n}\n\n@inproceedings{yu2018qanet,\n  title={{QANet}: Combining Local Convolution with Global Self-Attention for Reading Comprehension},\n  author={Yu, Adams Wei and Dohan, David and Luong, Minh-Thang and Zhao, Rui and Chen, Kai and Norouzi, Mohammad and Le, Quoc V},\n  booktitle=iclr,\n  year={2018}\n}\n\n@inproceedings{peters2018deep,\n  title={Deep Contextualized Word Representations},\n  author={Peters, Matthew and Neumann, Mark and Iyyer, Mohit and Gardner, Matt and Clark, Christopher and Lee, Kenton and Zettlemoyer, Luke},\n  booktitle=naacl,\n  volume={1},\n  pages={2227--2237},\n  year={2018}\n}\n\n@inproceedings{khashabi2018looking,\n  title={Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences},\n  author={Khashabi, Daniel and Chaturvedi, Snigdha and Roth, Michael and Upadhyay, Shyam and Roth, Dan},\n  booktitle=naacl,\n  volume={1},\n  pages={252--262},\n  year={2018}\n}\n\n@inproceedings{huang2018fusionnet,\n  title={{FusionNet}: Fusing via Fully-aware Attention with Application to Machine Comprehension},\n  author={Huang, Hsin-Yuan and Zhu, Chenguang and Shen, Yelong and Chen, Weizhu},\n  booktitle=iclr,\n  year={2018}\n}\n\n@inproceedings{zhang2018personalizing,\n\ttitle = {Personalizing Dialogue Agents: {I} have a dog, do you have pets too?},\n\tbooktitle = acl,\n\tauthor = {Zhang, Saizheng and Dinan, Emily and Urbanek, Jack and Szlam, Arthur and Kiela, Douwe and Weston, Jason},\n\tyear = {2018},\n  volume={1},\n  pages={2204--2213}\n}\n\n\n@inproceedings{fan2018hierarchical,\n  title={Hierarchical Neural Story Generation},\n  author={Fan, Angela and Lewis, Mike and Dauphin, Yann},\n  booktitle=acl,\n  volume={1},\n  pages={889--898},\n  year={2018}\n}\n\n@inproceedings{rajpurkar2018know,\n  title={Know What You Don't Know: Unanswerable Questions for {SQuAD}},\n  author={Rajpurkar, Pranav and Jia, Robin and Liang, Percy},\n  
booktitle=acl,\n  volume={2},\n  pages={784--789},\n  year={2018}\n}\n\n@inproceedings{chaganty2018price,\n  title={The price of debiasing automatic metrics in natural language evaluation},\n  author={Chaganty, Arun Tejasvi and Mussman, Stephen and Liang, Percy},\n  booktitle=acl,\n  volume={1},\n  pages={643--653},\n  year={2018}\n}\n\n@inproceedings{liu2018stochastic,\n  title={Stochastic answer networks for machine reading comprehension},\n  author={Liu, Xiaodong and Shen, Yelong and Duh, Kevin and Gao, Jianfeng},\n  booktitle=acl,\n  volume={1},\n  pages={1694--1704},\n  year={2018}\n}\n\n@inproceedings{lin2018denoising,\n  title={Denoising distantly supervised open-domain question answering},\n  author={Lin, Yankai and Ji, Haozhe and Liu, Zhiyuan and Sun, Maosong},\n  booktitle=acl,\n  volume={1},\n  pages={1736--1745},\n  year={2018}\n}\n\n@inproceedings{saha2018complex,\n    title = {Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph},\n    booktitle = aaai,\n    author = {Saha, Amrita and Pahuja, Vardaan and Khapra, Mitesh M. 
and Sankaranarayanan, Karthik and Chandar, Sarath},\n    year = {2018}\n}\n\n@inproceedings{clark2018simple,\n  title={Simple and Effective Multi-Paragraph Reading Comprehension},\n  author={Clark, Christopher and Gardner, Matt},\n  booktitle=acl,\n  volume={1},\n  pages={845--855},\n  year={2018}\n}\n\n@inproceedings{xiong2018dcn+,\n  title={{DCN+}: Mixed objective and deep residual coattention for question answering},\n  author={Xiong, Caiming and Zhong, Victor and Socher, Richard},\n  booktitle=iclr,\n  year={2018}\n}\n\n@inproceedings{seo2018neural,\n  title={Neural Speed Reading via {Skim-RNN}},\n  author={Seo, Minjoon and Min, Sewon and Farhadi, Ali and Hajishirzi, Hannaneh},\n  booktitle=iclr,\n  year={2018}\n}\n\n@inproceedings{guo2018dialog,\n  title = {Dialog-to-Action: Conversational Question Answering Over a Large-Scale Knowledge Base},\n  author = {Guo, Daya and Tang, Duyu and Duan, Nan and Zhou, Ming and Yin, Jian},\n  booktitle = nips,\n  pages = {2943--2952},\n  year = {2018}\n}\n\n@inproceedings{choi2018quac,\n\ttitle = {{QuAC}: Question Answering in Context},\n\tbooktitle = emnlp,\n\tauthor = {Choi, Eunsol and He, He and Iyyer, Mohit and Yatskar, Mark and Yih, Wen-tau and Choi, Yejin and Liang, Percy and Zettlemoyer, Luke},\n  pages={2174--2184},\n\tyear = {2018}\n}\n\n@inproceedings{saeidi2018interpretation,\n  title={Interpretation of Natural Language Rules in Conversational Machine Reading},\n  author={Saeidi, Marzieh and Bartolo, Max and Lewis, Patrick and Singh, Sameer and Rockt{\\\"a}schel, Tim and Sheldon, Mike and Bouchard, Guillaume and Riedel, Sebastian},\n  booktitle=emnlp,\n  pages={2087--2097},\n  year={2018}\n}\n\n@inproceedings{yang2018hotpotqa,\n  title={Hotpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering},\n  author={Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D},\n  booktitle=emnlp,\n  pages={2369--2380},\n  
year={2018}\n}\n\n@inproceedings{sugawara2018what,\n  author = \t{Sugawara, Saku and Inui, Kentaro and Sekine, Satoshi and Aizawa, Akiko},\n  title = \t{What Makes Reading Comprehension Questions Easier?},\n  booktitle = emnlp,\n  year = \t{2018},\n  pages = {4208--4219}\n}\n\n@inproceedings{kaushik2018how,\n  author = {Kaushik, Divyansh and Lipton, Zachary C.},\n  title = {How Much Reading Does Reading Comprehension Require? {A} Critical Investigation of Popular Benchmarks},\n  booktitle = emnlp,\n  year = \t{2018},\n  pages = \t{5010--5015}\n}\n\n@inproceedings{lei2018simple,\n  title={Simple recurrent units for highly parallelizable recurrence},\n  author={Lei, Tao and Zhang, Yu and Wang, Sida I and Dai, Hui and Artzi, Yoav},\n  booktitle=emnlp,\n  pages={4470--4481},\n  year={2018}\n}\n\n@article{kovcisky2018narrativeqa,\n  title={The {NarrativeQA} reading comprehension challenge},\n  author={Ko{\\v{c}}isk{\\`y}, Tom{\\'a}{\\v{s}} and Schwarz, Jonathan and Blunsom, Phil and Dyer, Chris and Hermann, Karl Moritz and Melis, G{\\'a}bor and Grefenstette, Edward},\n  journal=tacl,\n  volume={6},\n  pages={317--328},\n  year={2018}\n}\n\n@article{welbl2018constructing,\n  title={Constructing Datasets for Multi-hop Reading Comprehension Across Documents},\n  author={Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian},\n  journal=tacl,\n  volume={6},\n  pages={287--302},\n  year={2018}\n}\n\n@article{reddy2019coqa,\n     title={{CoQA}: A Conversational Question Answering Challenge},\n     author={Reddy, Siva and Chen, Danqi and Manning, Christopher D},\n     journal=tacl,\n     year={2019},\n     note={accepted pending revisions}\n}\n\n@article{raison2018weaver,\n  title={Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading},\n  author={Raison, Martin and Mazar{\\'e}, Pierre-Emmanuel and Das, Rajarshi and Bordes, 
Antoine},\n  journal={arXiv preprint arXiv:1804.10490},\n  year={2018}\n}\n\n@article{devlin2018bert,\n  title={{BERT}: Pre-training of Deep Bidirectional Transformers for Language Understanding},\n  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},\n  journal={arXiv preprint arXiv:1810.04805},\n  year={2018}\n}\n\n@techreport{radford2018improving,\n  title={Improving language understanding by generative pre-training},\n  author={Radford, Alec and Narasimhan, Karthik and Salimans, Tim and Sutskever, Ilya},\n  year={2018},\n  institution={OpenAI}\n}\n\n\n@article{gao2018neural,\n  title={Neural Approaches to Conversational {AI}},\n  author={Gao, Jianfeng and Galley, Michel and Li, Lihong},\n  journal={arXiv preprint arXiv:1809.08267},\n  year={2018}\n}\n\n@article{huang2018flowqa,\n  title={{FlowQA}: Grasping Flow in History for Conversational Machine Comprehension},\n  author={Huang, Hsin-Yuan and Choi, Eunsol and Yih, Wen-tau},\n  journal={arXiv preprint arXiv:1810.06683},\n  year={2018}\n}\n"
  },
  {
    "path": "std-macros.tex",
    "content": "% version 1.2 05/21/08\n\\newcommand\\sa{\\ensuremath{\\mathcal{a}}}\n\\newcommand\\sd{\\ensuremath{\\mathcal{d}}}\n\\newcommand\\se{\\ensuremath{\\mathcal{e}}}\n\\newcommand\\sg{\\ensuremath{\\mathcal{g}}}\n\\newcommand\\sh{\\ensuremath{\\mathcal{h}}}\n\\newcommand\\seye{\\ensuremath{\\mathcal{i}}}\n\\newcommand\\sj{\\ensuremath{\\mathcal{j}}}\n\\newcommand\\sk{\\ensuremath{\\mathcal{k}}}\n\\newcommand\\sm{\\ensuremath{\\mathcal{m}}}\n\\newcommand\\sn{\\ensuremath{\\mathcal{n}}}\n\\newcommand\\sq{\\ensuremath{\\mathcal{q}}}\n\\newcommand\\sr{\\ensuremath{\\mathcal{r}}}\n\\newcommand\\su{\\ensuremath{\\mathcal{u}}}\n\\newcommand\\sv{\\ensuremath{\\mathcal{v}}}\n\\newcommand\\sw{\\ensuremath{\\mathcal{w}}}\n\\newcommand\\sx{\\ensuremath{\\mathcal{x}}}\n\\newcommand\\sy{\\ensuremath{\\mathcal{y}}}\n\\newcommand\\sz{\\ensuremath{\\mathcal{z}}}\n\\newcommand\\sA{\\ensuremath{\\mathcal{A}}}\n\\newcommand\\sB{\\ensuremath{\\mathcal{B}}}\n\\newcommand\\sC{\\ensuremath{\\mathcal{C}}}\n\\newcommand\\sD{\\ensuremath{\\mathcal{D}}}\n\\newcommand\\sE{\\ensuremath{\\mathcal{E}}}\n\\newcommand\\sF{\\ensuremath{\\mathcal{F}}}\n\\newcommand\\sG{\\ensuremath{\\mathcal{G}}}\n\\newcommand\\sH{\\ensuremath{\\mathcal{H}}}\n\\newcommand\\sI{\\ensuremath{\\mathcal{I}}}\n\\newcommand\\sJ{\\ensuremath{\\mathcal{J}}}\n\\newcommand\\sK{\\ensuremath{\\mathcal{K}}}\n\\newcommand\\sL{\\ensuremath{\\mathcal{L}}}\n\\newcommand\\sM{\\ensuremath{\\mathcal{M}}}\n\\newcommand\\sN{\\ensuremath{\\mathcal{N}}}\n\\newcommand\\sO{\\ensuremath{\\mathcal{O}}}\n\\newcommand\\sP{\\ensuremath{\\mathcal{P}}}\n\\newcommand\\sQ{\\ensuremath{\\mathcal{Q}}}\n\\newcommand\\sR{\\ensuremath{\\mathcal{R}}}\n\\newcommand\\sS{\\ensuremath{\\mathcal{S}}}\n\\newcommand\\sT{\\ensuremath{\\mathcal{T}}}\n\\newcommand\\sU{\\ensuremath{\\mathcal{U}}}\n\\newcommand\\sV{\\ensuremath{\\mathcal{V}}}\n\\newcommand\\sW{\\ensuremath{\\mathcal{W}}}\n\\newcommand\\sX{\\ensuremath{\\mathcal{X}}}\n\\newcommand\\sY{\\ensurem
ath{\\mathcal{Y}}}\n\\newcommand\\sZ{\\ensuremath{\\mathcal{Z}}}\n\\newcommand\\ba{\\ensuremath{\\mathbf{a}}}\n\\newcommand\\bb{\\ensuremath{\\mathbf{b}}}\n\\newcommand\\bc{\\ensuremath{\\mathbf{c}}}\n\\newcommand\\bd{\\ensuremath{\\mathbf{d}}}\n\\newcommand\\be{\\ensuremath{\\mathbf{e}}}\n\\newcommand\\bef{\\ensuremath{\\mathbf{f}}}\n\\newcommand\\bg{\\ensuremath{\\mathbf{g}}}\n\\newcommand\\bh{\\ensuremath{\\mathbf{h}}}\n\\newcommand\\bi{\\ensuremath{\\mathbf{i}}}\n\\newcommand\\bj{\\ensuremath{\\mathbf{j}}}\n\\newcommand\\bk{\\ensuremath{\\mathbf{k}}}\n\\newcommand\\bl{\\ensuremath{\\mathbf{l}}}\n\\newcommand\\bn{\\ensuremath{\\mathbf{n}}}\n\\newcommand\\bo{\\ensuremath{\\mathbf{o}}}\n\\newcommand\\bp{\\ensuremath{\\mathbf{p}}}\n\\newcommand\\bq{\\ensuremath{\\mathbf{q}}}\n\\newcommand\\br{\\ensuremath{\\mathbf{r}}}\n\\newcommand\\bs{\\ensuremath{\\mathbf{s}}}\n\\newcommand\\bt{\\ensuremath{\\mathbf{t}}}\n\\newcommand\\bu{\\ensuremath{\\mathbf{u}}}\n\\newcommand\\bv{\\ensuremath{\\mathbf{v}}}\n\\newcommand\\bw{\\ensuremath{\\mathbf{w}}}\n\\newcommand\\bx{\\ensuremath{\\mathbf{x}}}\n\\newcommand\\by{\\ensuremath{\\mathbf{y}}}\n\\newcommand\\bz{\\ensuremath{\\mathbf{z}}}\n\\newcommand\\bA{\\ensuremath{\\mathbf{A}}}\n\\newcommand\\bB{\\ensuremath{\\mathbf{B}}}\n\\newcommand\\bC{\\ensuremath{\\mathbf{C}}}\n\\newcommand\\bD{\\ensuremath{\\mathbf{D}}}\n\\newcommand\\bE{\\ensuremath{\\mathbf{E}}}\n\\newcommand\\bF{\\ensuremath{\\mathbf{F}}}\n\\newcommand\\bG{\\ensuremath{\\mathbf{G}}}\n\\newcommand\\bH{\\ensuremath{\\mathbf{H}}}\n\\newcommand\\bI{\\ensuremath{\\mathbf{I}}}\n\\newcommand\\bJ{\\ensuremath{\\mathbf{J}}}\n\\newcommand\\bK{\\ensuremath{\\mathbf{K}}}\n\\newcommand\\bL{\\ensuremath{\\mathbf{L}}}\n\\newcommand\\bM{\\ensuremath{\\mathbf{M}}}\n\\newcommand\\bN{\\ensuremath{\\mathbf{N}}}\n\\newcommand\\bO{\\ensuremath{\\mathbf{O}}}\n\\newcommand\\bP{\\ensuremath{\\mathbf{P}}}\n\\newcommand\\bQ{\\ensuremath{\\mathbf{Q}}}\n\\newcommand\\bR{\\ensuremath{\\mathbf{R}}}
\n\\newcommand\\bS{\\ensuremath{\\mathbf{S}}}\n\\newcommand\\bT{\\ensuremath{\\mathbf{T}}}\n\\newcommand\\bU{\\ensuremath{\\mathbf{U}}}\n\\newcommand\\bV{\\ensuremath{\\mathbf{V}}}\n\\newcommand\\bW{\\ensuremath{\\mathbf{W}}}\n\\newcommand\\bX{\\ensuremath{\\mathbf{X}}}\n\\newcommand\\bY{\\ensuremath{\\mathbf{Y}}}\n\\newcommand\\bZ{\\ensuremath{\\mathbf{Z}}}\n\\newcommand\\Ba{\\ensuremath{\\mathbb{a}}}\n\\newcommand\\Bb{\\ensuremath{\\mathbb{b}}}\n\\newcommand\\Bc{\\ensuremath{\\mathbb{c}}}\n\\newcommand\\Bd{\\ensuremath{\\mathbb{d}}}\n\\newcommand\\Be{\\ensuremath{\\mathbb{e}}}\n\\newcommand\\Bf{\\ensuremath{\\mathbb{f}}}\n\\newcommand\\Bg{\\ensuremath{\\mathbb{g}}}\n\\newcommand\\Bh{\\ensuremath{\\mathbb{h}}}\n\\newcommand\\Bi{\\ensuremath{\\mathbb{i}}}\n\\newcommand\\Bj{\\ensuremath{\\mathbb{j}}}\n\\newcommand\\Bk{\\ensuremath{\\mathbb{k}}}\n\\newcommand\\Bl{\\ensuremath{\\mathbb{l}}}\n\\newcommand\\Bm{\\ensuremath{\\mathbb{m}}}\n\\newcommand\\Bn{\\ensuremath{\\mathbb{n}}}\n\\newcommand\\Bo{\\ensuremath{\\mathbb{o}}}\n\\newcommand\\Bp{\\ensuremath{\\mathbb{p}}}\n\\newcommand\\Bq{\\ensuremath{\\mathbb{q}}}\n\\newcommand\\Br{\\ensuremath{\\mathbb{r}}}\n\\newcommand\\Bs{\\ensuremath{\\mathbb{s}}}\n\\newcommand\\Bt{\\ensuremath{\\mathbb{t}}}\n\\newcommand\\Bu{\\ensuremath{\\mathbb{u}}}\n\\newcommand\\Bv{\\ensuremath{\\mathbb{v}}}\n\\newcommand\\Bw{\\ensuremath{\\mathbb{w}}}\n\\newcommand\\Bx{\\ensuremath{\\mathbb{x}}}\n\\newcommand\\By{\\ensuremath{\\mathbb{y}}}\n\\newcommand\\Bz{\\ensuremath{\\mathbb{z}}}\n\\newcommand\\BA{\\ensuremath{\\mathbb{A}}}\n\\newcommand\\BB{\\ensuremath{\\mathbb{B}}}\n\\newcommand\\BC{\\ensuremath{\\mathbb{C}}}\n\\newcommand\\BD{\\ensuremath{\\mathbb{D}}}\n\\newcommand\\BE{\\ensuremath{\\mathbb{E}}}\n\\newcommand\\BF{\\ensuremath{\\mathbb{F}}}\n\\newcommand\\BG{\\ensuremath{\\mathbb{G}}}\n\\newcommand\\BH{\\ensuremath{\\mathbb{H}}}\n\\newcommand\\BI{\\ensuremath{\\mathbb{I}}}\n\\newcommand\\BJ{\\ensuremath{\\mathbb{J}}}\n\\newcommand\\BK{\
\ensuremath{\\mathbb{K}}}\n\\newcommand\\BL{\\ensuremath{\\mathbb{L}}}\n\\newcommand\\BM{\\ensuremath{\\mathbb{M}}}\n\\newcommand\\BN{\\ensuremath{\\mathbb{N}}}\n\\newcommand\\BO{\\ensuremath{\\mathbb{O}}}\n\\newcommand\\BP{\\ensuremath{\\mathbb{P}}}\n\\newcommand\\BQ{\\ensuremath{\\mathbb{Q}}}\n\\newcommand\\BR{\\ensuremath{\\mathbb{R}}}\n\\newcommand\\BS{\\ensuremath{\\mathbb{S}}}\n\\newcommand\\BT{\\ensuremath{\\mathbb{T}}}\n\\newcommand\\BU{\\ensuremath{\\mathbb{U}}}\n\\newcommand\\BV{\\ensuremath{\\mathbb{V}}}\n\\newcommand\\BW{\\ensuremath{\\mathbb{W}}}\n\\newcommand\\BX{\\ensuremath{\\mathbb{X}}}\n\\newcommand\\BY{\\ensuremath{\\mathbb{Y}}}\n\\newcommand\\BZ{\\ensuremath{\\mathbb{Z}}}\n\\newcommand\\balpha{\\ensuremath{\\mbox{\\boldmath$\\alpha$}}}\n\\newcommand\\bbeta{\\ensuremath{\\mbox{\\boldmath$\\beta$}}}\n\\newcommand\\btheta{\\ensuremath{\\mbox{\\boldmath$\\theta$}}}\n\\newcommand\\bphi{\\ensuremath{\\mbox{\\boldmath$\\phi$}}}\n\\newcommand\\bpi{\\ensuremath{\\mbox{\\boldmath$\\pi$}}}\n\\newcommand\\bpsi{\\ensuremath{\\mbox{\\boldmath$\\psi$}}}\n\\newcommand\\bmu{\\ensuremath{\\mbox{\\boldmath$\\mu$}}}\n% Basic\n\\newcommand\\T{\\text}\n\\newcommand\\sign{\\text{sign}}\n\\newcommand\\tr{\\text{tr}}\n\\newcommand\\fig[1]{\\begin{center} \\includegraphics{#1} \\end{center}}\n\\newcommand\\Fig[5]{\\begin{figure}[tb] \\begin{center} \\includegraphics[scale=#2]{#1} \\end{center} \\longcaption{#4}{\\label{fig:#3} #5} \\end{figure}}\n\\newcommand\\FigTop[4]{\\begin{figure}[t] \\begin{center} \\includegraphics[scale=#2]{#1} \\end{center} \\caption{\\label{fig:#3} #4} \\end{figure}}\n\\newcommand\\FigStar[4]{\\begin{figure*}[tb] \\begin{center} \\includegraphics[scale=#2]{#1} \\end{center} \\caption{\\label{fig:#3} #4} \\end{figure*}}\n\\newcommand\\aside[1]{\\quad\\text{[#1]}}\n\\newcommand\\homework[3]{\\title{#1} \\author{#2} \\date{#3} \\maketitle}\n% 
Math\n\\newcommand\\argmin{\\mathop{\\text{argmin}}}\n\\newcommand\\argmax{\\mathop{\\text{argmax}}}\n\\newcommand\\p[1]{\\ensuremath{\\left( #1 \\right)}} % Parenthesis ()\n\\newcommand\\pb[1]{\\ensuremath{\\left[ #1 \\right]}} % []\n\\newcommand\\pc[1]{\\ensuremath{\\left\\{ #1 \\right\\}}} % {}\n\\newcommand\\eval[2]{\\ensuremath{\\left. #1 \\right|_{#2}}} % Evaluation\n\\newcommand\\inv[1]{\\ensuremath{\\frac{1}{#1}}}\n\\newcommand\\half{\\ensuremath{\\frac{1}{2}}}\n\\newcommand\\R{\\ensuremath{\\mathbb{R}}} % Real numbers\n\\newcommand\\Z{\\ensuremath{\\mathbb{Z}}} % Integers\n\\newcommand\\inner[2]{\\ensuremath{\\left< #1, #2 \\right>}} % Inner product\n\\newcommand\\mat[2]{\\ensuremath{\\left(\\begin{array}{#1}#2\\end{array}\\right)}} % Matrix\n\\newcommand\\eqn[1]{\\begin{eqnarray} #1 \\end{eqnarray}} % Equation (array)\n\\newcommand\\eqnl[2]{\\begin{eqnarray} \\label{eqn:#1} #2 \\end{eqnarray}} % Equation (array) with label\n\\newcommand\\eqdef{\\ensuremath{\\stackrel{\\rm def}{=}}} % Equal by definition\n%\\newcommand{\\1}{\\mathbb{I}} % Indicator (don't use \\mathbbm{1} because bbm is not TrueType though)\n\\newcommand{\\1}{\\ensuremath{\\mathbbm{1}}}\n\\newcommand{\\bone}{\\mathbf{1}} % for vector one\n\\newcommand{\\bzero}{\\mathbf{0}} % for vector zero\n\\newcommand\\refeqn[1]{(\\ref{eqn:#1})}\n\\newcommand\\refeqns[2]{(\\ref{eqn:#1}) and (\\ref{eqn:#2})}\n\\newcommand\\refchp[1]{Chapter~\\ref{chp:#1}}\n\\newcommand\\refsec[1]{Section~\\ref{sec:#1}}\n\\newcommand\\refsecs[2]{Sections~\\ref{sec:#1} and~\\ref{sec:#2}}\n\\newcommand\\reffig[1]{Figure~\\ref{fig:#1}}\n\\newcommand\\reffigs[2]{Figures~\\ref{fig:#1} and~\\ref{fig:#2}}\n\\newcommand\\reffigss[3]{Figures~\\ref{fig:#1},~\\ref{fig:#2}, and~\\ref{fig:#3}}\n\\newcommand\\reffigsss[4]{Figures~\\ref{fig:#1},~\\ref{fig:#2},~\\ref{fig:#3}, 
and~\\ref{fig:#4}}\n\\newcommand\\reftab[1]{Table~\\ref{tab:#1}}\n\\newcommand\\refapp[1]{Appendix~\\ref{sec:#1}}\n\\newcommand\\refthm[1]{Theorem~\\ref{thm:#1}}\n\\newcommand\\refthms[2]{Theorems~\\ref{thm:#1} and~\\ref{thm:#2}}\n\\newcommand\\reflem[1]{Lemma~\\ref{lem:#1}}\n\\newcommand\\reflems[2]{Lemmas~\\ref{lem:#1} and~\\ref{lem:#2}}\n\\newcommand\\refprop[1]{Proposition~\\ref{prop:#1}}\n\\newcommand\\refdef[1]{Definition~\\ref{def:#1}}\n\\newcommand\\refcor[1]{Corollary~\\ref{cor:#1}}\n\\newcommand\\refalg[1]{Algorithm~\\ref{alg:#1}}\n\n\\newcommand\\Chapter[2]{\\chapter{#2}\\label{chp:#1}}\n\\newcommand\\Section[2]{\\section{#2}\\label{sec:#1}}\n\\newcommand\\Subsection[2]{\\subsection{#2}\\label{sec:#1}}\n\\newcommand\\Subsubsection[2]{\\subsubsection{#2}\\label{sec:#1}}\n%\\newtheorem{definition}{Definition}\n%\\newtheorem{assumption}{Assumption}\n%\\newtheorem{proposition}{Proposition}\n%\\newtheorem{theorem}{Theorem}\n%\\newtheorem{lemma}{Lemma}\n%\\newtheorem{corollary}{Corollary}\n% Probability\n\\newcommand\\cv{\\ensuremath{\\to}} % Convergence\n\\newcommand\\cvL{\\ensuremath{\\xrightarrow{\\mathcal{L}}}} % Convergence in law\n\\newcommand\\cvd{\\ensuremath{\\xrightarrow{d}}} % Convergence in distribution\n\\newcommand\\cvP{\\ensuremath{\\xrightarrow{P}}} % Convergence in probability\n\\newcommand\\cvas{\\ensuremath{\\xrightarrow{a.s.}}} % Convergence almost surely\n\\newcommand\\eqdistrib{\\ensuremath{\\stackrel{d}{=}}} % Equal in distribution\n\\newcommand\\E[1]{\\ensuremath{\\mathbb{E}{\\left[#1\\right]}}} % Expectation\n\\newcommand\\Ex[2]{\\ensuremath{\\mathbb{E}_{#1}\\left[#2\\right]}} % Expectation\n%\\newcommand\\var{\\ensuremath{\\text{var}}} % Variance\n\\newcommand\\cov{\\ensuremath{\\text{cov}}} % Covariance\n\\newcommand\\diag{\\ensuremath{\\text{diag}}} % Diagnonal matrix\n\\newcommand\\cE[2]{\\ensuremath{\\E \\left( #1 \\mid #2 \\right)}} % Conditional expectation\n\\newcommand\\KL[2]{\\ensuremath{\\T{KL}\\left( #1 \\,||\\, #2 
\\right)}} % KL-divergence\n\\newcommand\\D[2]{\\ensuremath{\\bD\\left( #1 \\,||\\, #2 \\right)}} % KL-divergence\n\n% Utilities\n\\newcommand\\lte{\\leq}\n\\newcommand\\gte{\\geq}\n\\newcommand\\lone[1]{\\ensuremath{\\|#1\\|_1}}\n\\newcommand\\ltwo[1]{\\ensuremath{\\|#1\\|_2^2}}\n\\newcommand\\naive{na\\\"{\\i}ve}\n\\newcommand\\Naive{Na\\\"{\\i}ve}\n\n% Debug\n\\usepackage{color}\n\\newcommand{\\tred}[1]{\\textcolor{red}{#1}}\n\\newcommand{\\hly}[1]{\\hl{yellow}{#1}}\n% \\def\\todo#1{\\hl{{\\bf TODO:} #1}{yellow}}\n\\def\\needcite{\\hl{{$^{\\tt\\small[citation\\ needed]}$}}{blue}}\n\\def\\needfig{\\hl{Figure X}{green}}\n\\def\\needtab{\\hl{Table Y}{green}}\n\\def\\note#1{\\hl{{\\bf NOTE:} #1}{yellow}}\n\\def\\dome{\\hl{{\\bf TODO:} write me!}{yellow}}\n"
  },
  {
    "path": "suthesis.sty",
    "content": "% Stanford University PhD thesis style -- modifications to the report style\n% This is unofficial so you should always double check against the\n% Registrar's office rules\n%\n% People are free to borrow as long as they change the name and date\n% in the \\typeout lines, the name of the file, and acknowledge the\n% work that has been done by previous people.  Ideally they should\n% comment their changes.\n\n% Original version by Joseph Pallas back in 1989\n% Modified by Emma Pease 5/7/92\n%   added singlespace environment from doublespace.sty\n%   added switches for variant title pages\n%   modified the figure environment according to changes in latex.tex\n%   corrected the signature page due to University rule changes\n%   added an optional third reader to signature page\n% Corrected a spacing problem with style changes 5/14/92 - Emma\n% Modified by Emma Pease 1/10/95\n% Modified for latex2e  5/17/95\n%   changed \\@xfloat and \\@footnotetext to reflect latex2e changes\n% Modified for latex2e 6/22/95 (Emma Pease)\n%   changed singlespace environment so it would work (taken from doublespace.sty)\n% Modified 9/8/95 (Emma Pease)\n%   removed doublespace.sty commands and explicitely inputted\n%   doublespace\n% Modified 12/17/96 (Emma Pease)\n%   added optional \\coprincipaladvisor (\\coprincipaladviser)\n% Modified 5/29/98\n%   replaced the required doublespace.sty by setspace.sty\n% Modified 8/21/98\n%   added a \\businessthesis for the school of business\n% Modified 8/22/98\n%   added a \\lawthesis\n% Modified 8/23/98\n%   spelling error in \\businessthesis def corrected\n% Modified 5/14/1999 by Emma Pease\n%   'By' dropped from title page\n% Modified 7/26/1999 by Emma Pease\n%   copyright page fixed\n% Modified 9/28/99 by Emma Pease\n%   more copyright page fixings\n% Modified 10/28/99 by Emma Pease\n%   and more copyright page fixings plus a minor mod on bibliography\n%   need to start thinking of overhauling to standard package format\n% Modified 
11/26/99\n%   fixed copyrightyear so that all Fall quarter theses are next\n%   year's copyright\n\n% Modified 5/31/01 by Emma Pease\n%   fixed certification statement.  Setup for twoside option.\n% Modified 6/4/01 by Emma Pease\n%   emphasized that it is unofficial\n% Modified 8/3/01 by Emma Pease\n%   setup so that on twoside if the intro material (page numbered with\n%   roman numerals) ends on an odd page an extra blank page is included so\n%   the main body (page numbered with arabic numbers) starts on an odd\n%   absolute page  (explanation modified 5/28/02)\n% Modified 5/28/02 by Emma Pease\n%   made first and second reader optional (not that the first reader\n%   should ever be missing but someone managed to avoid a second reader)\n%   If they aren't defined, they won't appear\n% Modified 7/13/2003 by Emma Pease\n%   dropped signature line for ``Approved for the University Committee\n%   on Graduate Studies'' on signature page.  Also made sure the next\n%   section starts on an odd page if two sided.\n% Modified 11/19/03 by Emma Pease\n%   fixed the bibliography so the addcontentsline works correctly with\n%   hyperref.  
Thanks to Peter Sturdza for pointing this out.\n% Modified 2/14/04 by Emma Pease\n%   Changed documentation on how to change line spacing\n% Modified 6/29/04 by Emma Pease\n%   Correction to humanitiesthesis definition\n\n% Modified 11/9/2004 by Emma Pease\n%   Reformatted Signature Page to fit requirements\n%   Reformatted Title page to fit requirements\n\n% Modified 8/26/2005 by Emma Pease\n%   Modified \\language to \\languagemajor so as not to interfere with\n%   babel.\n\n% Modified 10/31/2005 by Emma Pease\n%   added an optional fourth reader to signature page (Biology)\n%   added a length \\signaturespace\n\n% Modified 8/23/2006 by Emma Pease\n%   added () around names on signature page\n\n% Modified 5/7/2007 by Emma Pease\n%   redefined \\@endpart so that blank page after part has page number\n%   as per thesis office requirements\n\n% Modified 9/17/2008 by Emma Pease\n%   changed copyright year calculations so September theses are summer\n\n% November 2009 by Emma Pease\n%   changing to use online or hardcopy options for the new online\n%   submission possibility.\n\n% Modified May 2010 by Emma Pease\n%   added command \\onlinesignature which creates a signature page\n%   for the online version.  This should be the last command before the\n%   \\end{document}\n\n% Modified May 2014 by Emma Pease\n%   fixed error in the signature page (Stanford University Committee not just University Committee)\n\n%%%%%\n%%%%%   PRELIMS\n%%%%%\n\n\\ProvidesPackage{suthesis-2e}[2014/05/26]\n\n\n\n%%\\typeout{Document Style Option `suthesis' for latex2e <$Date: 9/17/2008 $>.}\n\\typeout{Note that this tries to fulfill the Stanford Thesis\n  requirements but it is unofficial}\n\n% First thing we do is make sure that report has been loaded.  
A\n% common error is to try to use suthesis as a documentstyle.\n\\@ifundefined{chapter}{\\@latexerr{The `suthesis' option should be used\nwith the `report' document style}{You should probably read the\nsuthesis documentation.}}{}\n\n%%%%%\n%%%%%   SETUP DOUBLESPACING\n%%%%%\n\n% include doublespace.sty for some of the stuff below\n\n\\RequirePackage{setspace}\n\n% default to online submission\n\\newif\\ifonline\n\\onlinetrue\n\\DeclareOption{online}{\\onlinetrue}\n\\DeclareOption{hardcopy}{\\onlinefalse}\n\\ProcessOptions\n\n\n% Use 1.3 times the normal baseline-to-baseline skip\n\\setstretch{1.3}\n\n\n%%%%%\n%%%%%   DOCUMENTATION\n%%%%%\n\n\\long\\def\\comment#1{}\n\\comment{\n\n  Example of use:\n    \\documentclass{report}\n\n\\usepackage{suthesis-2e}\n\\dept{Computer Science}\n\n\n    \\begin{document}\n    \\title{How to Write Theses\\\\\n            With Two Line Titles}\n    \\author{John Henry Candidate}\n    \\principaladviser{John Parker}\n    \\firstreader{John Green}\n    \\secondreader{John BigBooty}\n    \\thirdreader{Jane Supernumerary} %if needed\n    \\fourthreader{Severus Snape} %if needed\n\n    \\beforepreface\n    \\prefacesection{Preface}\n        This thesis tells you all you need to know about...\n    \\prefacesection{Acknowledgments}\n        I would like to thank...\n    \\afterpreface\n\n    \\chapter{Introduction}\n         ...\n    \\chapter{Conclusions}\n         ...\n    \\appendix\n    \\chapter{A Long Proof}\n         ...\n    \\bibliographystyle{plain}\n    \\bibliography{mybib}\n    \\end{document}\n\nDocumentation:\n    This style file modifies the standard report style to follow the\n    Graduate Degree Support Section of the Registrar's Office's\n    \"Directions for Preparing Doctoral Dissertations\".  It sets the\n    margins and interline spacing and disallows page breaks at\n    hyphens.\n\n    The \\beforepreface command creates the title page, a copyright page\n    (optionally), and a signature page.  
Then the user should put\n    preface section(s), using the \\prefacesection{section title}\n    command.  The \\afterpreface command then produces the tables of\n    contents, tables and figures, and sets things up to start\n    the main body (on arabic page 1).\n\n    The following commands can control what goes in the front matter\n    material:\n\n        \\title{thesis title}\n        \\author{author's name}\n        \\dept{author's department}\n                - Computer Science if omitted\nThe following switches allow for special title pages (not all are current)\n        \\committeethesis - for a thesis in a committee (no dept.)\n                           use \\dept{committee name}\n        \\programthesis - for a thesis in a program (no dept.)\n                           use \\dept{program name}\n        \\educationthesis - for the School of Education. \\dept doesn't matter\n        \\businessthesis - for the Graduate School of Business. \\dept doesn't matter\n        \\lawthesis - for the School of Law. 
\\dept doesn't matter\n        \\humanitiesthesis - for a thesis also submitted to the Graduate\n                            Program in Humanities\n        \\specialthesis  - for a Graduate Special thesis\n        \\industrialthesis - for a thesis in Industrial Engineering\n        \\dualthesis     - for a thesis in a dual language department.\n                          Also define \\languagemajor{language}.\n                          e.g., \\dept{French and Italian}\n                          \\languagemajor{Italian}\n         \\principaladviser{the principal advisor's name}\n           (or \\principaladvisor, if you prefer advisor spelled with o)\n        \\coprincipaladviser{optional second principal advisor's name}\n           (or \\coprincipaladvisor, use only if you have two principal\n           advisors only for the second one)\n        \\firstreader{the first reader's name}\n        \\secondreader{the second reader's name}\n        \\thirdreader{optional third reader's name}\n        \\fourthreader{optional fourth reader's name}\n        \\setlength{\\signaturespace}{.5in}\n                - default is .5in, can be adjusted to fit all\n                signatures in one page\n        \\submitdate{month year in which submitted to GPO}\n                - date LaTeX'd if omitted\n        \\copyrightyear{year degree conferred (next year if submitted\n          in Fall quarter)}\n                - year LaTeX'd (or next year, in December) if omitted\n        \\copyrighttrue or \\copyrightfalse\n                - produce or don't produce a copyright page (true by default)\n        \\thesiscopyrighttrue or \\thesiscopyrightfalse\n                - produces the style of copyright page listed by the\n                Thesis Office or the style that everyone else uses\n                (Thesis office by default).\n        \\figurespagetrue or \\figurespagefalse\n                - produce or don't produce a List of Figures page\n                  (true by default)\n      
  \\tablespagetrue or \\tablespagefalse\n                - produce or don't produce a List of Tables page\n                  (true by default)\n\nThis style uses interline spacing that is 1.3 times normal, except\nin the figure and table environments where normal spacing is used.\nThat can be changed by doing:\n    \\setstretch{1.6}\n(or whatever you want instead of 1.6)\n\nThis command should be put before the \\begin{document} command but\nafter loading the packages\n\nYou can also set any particular section in singlespacing mode by using\nthe singlespace environment.  For example\n\n\\begin{quote}\n\\begin{singlespace}\n...\n\\end{singlespace}\n\\end{quote}\n\nmakes the quote singlespaced.  See the documentation for setspace.sty\nfor more information.\n\nThe example at the beginning shows the 12pt substyle being used.  This\nseems to give acceptable looking results, but it may be omitted to get\nsmaller print.\n\n}\n\n\n\n%%%%%\n%%%%%   SETUP MARGINS AND PENALTIES NEEDED FOR STANFORD THESIS\n%%%%%\n\n% We need 1\" margins except on the binding edge, where it is 1 1/2\"\n% Theses may be either single or double sided\n  \\if@twoside\n     \\setlength\\oddsidemargin   {36.1\\p@}\n     \\setlength\\evensidemargin  {0\\p@}\n     \\setlength\\marginparwidth {40\\p@}\n  \\else\n     \\setlength\\oddsidemargin   {36.1\\p@}\n     \\setlength\\evensidemargin  {36.1\\p@}\n     \\setlength\\marginparwidth  {40\\p@}\n  \\fi\n\n\\marginparsep 10pt\n%\\oddsidemargin 0.5in \\evensidemargin 0in\n%\\marginparwidth 40pt\n\n\n\\topmargin 0pt \\headsep .5in\n\\textheight 8.1in \\textwidth 6in\n\n% Disallow page breaks at hyphens (this will give some underfull vbox's,\n% so an alternative is to use \\brokenpenalty=100 and manually search\n% for and fix such page breaks)\n\\brokenpenalty=10000\n\n%%%%%\n%%%%%   SETUP COMMANDS PECULIAR TO THESES\n%%%%%\n\n% \\author, \\title are defined in report; here are the rest of the\n% front matter defining 
macros\n\\def\\dept#1{\\gdef\\@dept{#1}}\n\\def\\advis@r{Adviser} % default spelling\n\\def\\principaladviser#1{\\gdef\\@principaladviser{#1}}\n\\def\\principaladvisor#1{\\gdef\\@principaladviser{#1}\\gdef\\advis@r{Advisor}}\n\\def\\coprincipaladvisor#1{\\gdef\\@coprincipaladviser{#1}\\gdef\\advis@r{Co-Advisor}}\n\\def\\coprincipaladviser#1{\\gdef\\@coprincipaladviser{#1}\\gdef\\advis@r{Co-Adviser}}\n\\def\\firstreader#1{\\gdef\\@firstreader{#1}}\n\\def\\secondreader#1{\\gdef\\@secondreader{#1}}\n\\def\\thirdreader#1{\\gdef\\@thirdreader{#1}}\n\\def\\fourthreader#1{\\gdef\\@fourthreader{#1}}\n\\def\\submitdate#1{\\gdef\\@submitdate{#1}}\n\\def\\copyrightyear#1{\\gdef\\@copyrightyear{#1}} % \\author, \\title in report\n% needed only for dual language departments to choose the language\n\\def\\languagemajor#1{\\gdef\\@languagemajor{#1}} \\def\\@language{babel}\n\\def\\jointprogram#1{\\gdef\\@jointprogram{#1}}\n\\def\\@title{}\\def\\@author{}\\def\\@dept{computer science}\n\\def\\@principaladviser{}\\def\\@firstreader{*}\\def\\@secondreader{*}\n\\def\\@coprincipaladviser{*}\n\\def\\@thirdreader{*}\n\\def\\@fourthreader{*}\n\\def\\@submitdate{\\ifcase\\the\\month\\or\n  January\\or February\\or March\\or April\\or May\\or June\\or\n  July\\or August\\or September\\or October\\or November\\or December\\fi\n  \\space \\number\\the\\year}\n% Stanford says that Fall quarter theses should have the next year as the\n% copyright year\n\\ifnum\\month>9\n    \\@tempcnta=\\year \\advance\\@tempcnta by 1\n    \\edef\\@copyrightyear{\\number\\the\\@tempcnta}\n\\else\n    \\def\\@copyrightyear{\\number\\the\\year}\n\\fi\n\\newif\\ifcopyright \\newif\\iffigurespage \\newif\\iftablespage\n\\newif\\ifthesiscopyright\n\n\n\\copyrighttrue\n\\thesiscopyrighttrue\n\n\\figurespagetrue \\tablespagetrue\n\n\n\\def\\@standardsub{submitted to the department of \\uppercase\\expandafter{\\@dept}\\\\\n                and the committee on graduate 
studies}\n\\def\\@standardend{}\n\n\\def\\committeethesis{\\let\\@whichsub=\\@committeesub}\n\\def\\programthesis{\\let\\@whichsub=\\@programsub}\n\\def\\educationthesis{\\let\\@whichsub=\\@educationsub}\n\\def\\businessthesis{\\let\\@whichsub=\\@businesssub}\n\\def\\lawthesis{\\let\\@whichsub=\\@lawsub}\n\\def\\humanitiesthesis{\\let\\@whichsub=\\@humanitiessub%\n\\let\\@whichend=\\@humanitiesend}\n\\def\\specialthesis{\\let\\@whichsub=\\@specialsub%\n\\let\\@whichend=\\@specialend}\n\\def\\industrialthesis{\\let\\@whichsub=\\@industrialsub%\n\\let\\@whichend=\\@industrialend}\n\\def\\dualthesis{\\let\\@whichsub=\\@dualsub%\n\\let\\@whichend=\\@dualend}\n\n\n\\def\\@committeesub{SUBMITTED TO THE COMMITTEE ON \\uppercase\\expandafter{\\@dept}\\\\\n                AND THE COMMITTEE ON GRADUATE STUDIES}\n\\def\\@programsub{SUBMITTED TO THE PROGRAM IN \\uppercase\\expandafter{\\@dept}\\\\\n                AND THE COMMITTEE ON GRADUATE STUDIES}\n\\def\\@educationsub{SUBMITTED TO THE GRADUATE SCHOOL OF EDUCATION\\\\\n                AND THE COMMITTEE ON GRADUATE STUDIES}\n\\def\\@businesssub{SUBMITTED TO THE GRADUATE SCHOOL OF BUSINESS\\\\ AND THE\n  COMMITTEE ON GRADUATE STUDIES}\n\\def\\@lawsub{SUBMITTED TO THE GRADUATE SCHOOL OF LAW\\\\\n                AND THE COMMITTEE ON GRADUATE STUDIES}\n\n\\def\\@humanitiessub{SUBMITTED TO THE DEPARTMENT OF\\\\ \\uppercase\\expandafter{\\@dept}\n                                AND THE\\\\ COMMITTEE ON\\\\ GRADUATE STUDIES}\n\\def\\@humanitiesend{\\\\IN\\\\ \\uppercase\\expandafter{\\@jointprogram} AND HUMANITIES}\n\n\\def\\@specialsub{SUBMITTED TO THE COMMITTEE ON GRADUATE STUDIES}\n\\def\\@specialend{\\\\IN\\\\ \\uppercase\\expandafter{\\@dept}}\n\n\n\\def\\@dualsub{SUBMITTED TO THE DEPARTMENT OF \\uppercase\\expandafter{\\@dept}\\\\\nAND THE COMMITTEE ON GRADUATE STUDIES}\n\\def\\@dualend{\\\\IN\\\\ 
\\uppercase\\expandafter{\\@languagemajor}}\n\n\n\\let\\@whichend=\\@standardend\n\\let\\@whichsub=\\@standardsub\n\n\n\\def\\titlep{%\n        \\thispagestyle{empty}%\n        \\null\\vskip1in%\n        \\begin{center}\n                \\large\\uppercase\\expandafter{\\@title}\n        \\end{center}\n        \\vfill\n        \\begin{center}\n\\large\n%                \\sc a dissertation\\\\\n%                \\lowercase\\expandafter{\\@whichsub}\\\\\n%                of stanford university\\\\\n%                in partial fulfillment of the requirements\\\\\n%                for the degree of\\\\\n%                doctor of philosophy \\uppercase\\expandafter{\\@whichend}\n                A DISSERTATION\\\\\n                \\uppercase\\expandafter{\\@whichsub}\\\\\n                OF STANFORD UNIVERSITY\\\\\n                IN PARTIAL FULFILLMENT OF THE REQUIREMENTS\\\\\n                FOR THE DEGREE OF\\\\\n                DOCTOR OF PHILOSOPHY \\uppercase\\expandafter{\\@whichend}\n        \\end{center}\n        \\vfill\n        \\begin{center}\n                \\rm \\@author\\\\\n                \\@submitdate\\\\\n        \\end{center}\\vskip.5in\\newpage}\n\n\\def\\thesiscopyrightpage{%\n        \\null\\vfill\n        \\begin{center}\n                \\large\n                \\copyright\\ Copyright\\ by \\@author\\ \\@copyrightyear\\\\\n                All Rights Reserved\n        \\end{center}\n        \\vfill\\newpage}\n\n\\def\\tradcopyrightpage{%\n        \\null\\vfill\n        \\begin{center}\n                \\large\n                Copyright\\ \\copyright\\ \\@copyrightyear\\ by \\@author\\\\\n                All Rights Reserved\n        \\end{center}\n        \\vfill\\newpage}\n\n\n\n\n\\newlength{\\signaturespace}\n\\setlength{\\signaturespace}{.5in}\n\n\n\\long\\def\\signature#1{%\n\\begin{flushright}\n\\begin{minipage}{5in}\n\\parindent=0pt\nI certify that I have read this dissertation and that, in my opinion,\nit is fully adequate in scope and 
quality as a dissertation for the degree\nof Doctor of Philosophy.\n\\par\n\\vspace{\\signaturespace}\n%\\hbox to 4in{\\hfil\\shortstack{\\vrule width 3in height 0.4pt\\\\ #1}}\n\\hbox to 5in{\\hfil\\begin{tabular}{@{}l@{}}\\vrule width 3in height\n    0.4pt depth 0pt\\\\ #1\\end{tabular}}\n\\end{minipage}\n\\end{flushright}}\n\n\\long\\def\\ucgssignature{%\n\\begin{flushright}\n\\begin{minipage}{5in}\n\\parindent=0pt\n\\hfill Approved for the Stanford University Committee on Graduate Studies\n\\par\n\\vspace{\\signaturespace}\n\\hbox to 5in{\\hfil\\begin{tabular}{@{}l@{}}\\vrule width 3in height\n    0.4pt depth 0pt\\end{tabular}}\n\\end{minipage}\n\\end{flushright}}\n\n\n\\def\\signaturepage{%\n\\ifonline\n\\setcounter{page}{0}\n\\def\\thepage{}\n\\thispagestyle{myheadings}\n\\markboth{\\rm \\@author}{\\rm \\@author}\\fi\n\\signature{(\\@principaladviser)\\quad Principal \\advis@r}\n  \\vfill\n% if second principal advisor\n        \\if*\\@coprincipaladviser \\else\n        \\signature{(\\@coprincipaladviser)\\quad Principal \\advis@r}\n        \\vfill\\fi\n        \\if*\\@firstreader \\else\n        \\signature{(\\@firstreader)}\n        \\vfill\\fi\n        \\if*\\@secondreader \\else\n        \\signature{(\\@secondreader)}\n        \\vfill\\fi\n% if thirdreader then do \\signature\\@thirdreader \\vfill\n        \\if*\\@thirdreader \\else\n        \\signature{(\\@thirdreader)}\n        \\vfill\\fi\n% if fourthreader then do \\signature\\@fourthreader \\vfill\n        \\if*\\@fourthreader \\else\n        \\signature{(\\@fourthreader)}\n        \\vfill\\fi\n\\ucgssignature\n}\n\n\\def\\onlinesignature{\n\\cleardoublepage\n\\@twosidetrue\n\\signaturepage\n}\n\n\\def\\beforepreface{\n        \\pagenumbering{roman}\n        \\pagestyle{plain}\n        \\titlep\n% online version has no copyright or signature pages but page counter\n% must be incremented\n% signature page should come at end\n        \\ifonline\\setcounter{page}{4}\\else\n        
\\ifcopyright\\ifthesiscopyright\\thesiscopyrightpage\\else\\tradcopyrightpage\\fi\\fi\n        \\signaturepage\\fi\n        \\cleardoublepage}\n\n\n\\def\\prefacesection#1{%\n        \\chapter*{#1}\n        \\addcontentsline{toc}{chapter}{#1}}\n\n\\def\\afterpreface{\\newpage\n        \\tableofcontents\n        \\iftablespage\n                \\listoftables\n        \\fi\n        \\iffigurespage\n                \\listoffigures\n        \\fi\n        \\cleardoublepage\n        \\pagenumbering{arabic}\n        \\pagestyle{headings}}\n\n% Redefine \\thebibliography to go to a new page and put an entry in the\n% table of contents\n\\let\\@ldthebibliography\\thebibliography\n\\renewcommand{\\thebibliography}[1]{\\newpage\n                \\@ldthebibliography{#1}%\n\\addcontentsline{toc}{chapter}{\\bibname}}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%                        PART                          %\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\def\\part{\\cleardoublepage   % Starts new page.\n   \\thispagestyle{plain}%    % Page style of part page is 'plain'\n  \\if@twocolumn              % IF two-column style\n     \\onecolumn              %  THEN \\onecolumn\n     \\@tempswatrue           %       @tempswa := true\n    \\else \\@tempswafalse     %  ELSE @tempswa := false\n  \\fi                        % FI\n  \\hbox{}\\vfil               % Add fil glue to center title\n%%  \\bgroup  \\centering      % BEGIN centering %% Removed 19 Jan 88\n  \\secdef\\@part\\@spart}\n\n\\def\\@part[#1]#2{\\ifnum \\c@secnumdepth >-2\\relax  % IF secnumdepth > -2\n        \\refstepcounter{part}%                    %   THEN step\n                                                  %         part counter\n        \\addcontentsline{toc}{part}{\\thepart      %        add toc line\n        \\hspace{1em}#1}\\else                      %   ELSE add\n                                                  %         unnumb. 
line\n        \\addcontentsline{toc}{part}{#1}\\fi        % FI\n   \\markboth{}{}%\n   {\\centering                       % %% added 19 Jan 88\n    \\interlinepenalty \\@M            %% RmS added 11 Nov 91\n    \\ifnum \\c@secnumdepth >-2\\relax  % IF secnumdepth > -2\n      \\huge\\bfseries \\partname~\\thepart    %   THEN Print '\\partname' and\n    \\par                             %         number in \\huge bold.\n    \\vskip 20\\p@\\fi                  %        Add space before title.\n    \\Huge \\bfseries                        % FI\n    #2\\par}\\@endpart}                % Print Title in \\Huge bold.\n                                     % Bug Fix 13 Nov 89: #1 -> #2\n\n% redefine \\@endpart so the blank page after part has a page number\n\\def\\@endpart{\\vfil\\newpage\n              \\if@twoside\n               \\if@openright\n                \\null\n                \\thispagestyle{plain}%\n                \\newpage\n               \\fi\n              \\fi\n              \\if@tempswa\n                \\twocolumn\n              \\fi}\n\n\n% Start out normal\n\\pagestyle{headings}\n"
  },
  {
    "path": "thesis.tex",
    "content": "\\documentclass[12pt]{report}\n\\usepackage{suthesis}\n%\\documentstyle[12pt,suthesis]{report}\n\n% -- Imports --\n% (general libraries)\n\\usepackage{times,latexsym,amsfonts,amssymb,amsmath,graphicx,url,bbm,rotating}\n\\usepackage{multirow,hhline,stmaryrd,bussproofs,mathtools,siunitx}\n\\usepackage{booktabs,xcolor,csquotes,calligra}\n% (custom libraries)\n\\usepackage{afterpage}\n\\usepackage{longtable}\n\\usepackage{fitch}\n% (inline references)\n\\usepackage{natbib}\n\\usepackage{tabularx}\n\\usepackage[hidelinks]{hyperref}\n\\hypersetup{\n    colorlinks=true,\n    citecolor=.,\n    linkcolor=.,\n    urlcolor=blue\n}\n\n\\usepackage{epigraph}\n\\renewcommand{\\epigraphsize}{\\normalsize}\n\\setlength{\\epigraphwidth}{0.9\\textwidth}\n\n% (tikz)\n\\usepackage{soul}\n\\definecolor{light-yellow}{RGB}{255, 255, 153}\n\\sethlcolor{light-yellow}\n\\usepackage{tikz}\n\\usepackage{tikz-dependency,pifont}\n\\usetikzlibrary{shapes.arrows,chains,positioning,automata,trees,calc}\n\\usetikzlibrary{patterns,matrix}\n\\usetikzlibrary{decorations.pathmorphing,decorations.markings}\n% (print algorithms)\n\\usepackage[ruled,lined,linesnumbered]{algorithm2e}\n% (custom)\n\\input std-macros.tex\n\\input macros.tex\n\n% (paper compilation hacks)\n\\def\\newcite#1{\\citet{#1}}\n\\def\\cite#1{\\citep{#1}}\n%\\def\\newcite#1{\\textcite{#1}}\n%\\def\\cite#1{\\autocite{#1}}\n\\definecolor{darkblue}{rgb}{0.0,0.0,0.4}\n\n\n% Common hyphenations\n\\hyphenation{Text-Runner}\n\\hyphenation{Verb-Ocean}\n\\hyphenation{Raj-pur-kar}\n\n%\\bibliographystyle{plainnat}\n\n\n% Comments\n\\usepackage{xspace}\n\\usepackage{xargs} % commandx\n\\usepackage[colorinlistoftodos,prependcaption,textsize=tiny]{todonotes}\n\\usepackage{marginnote}\n\\usepackage{color}\n\\definecolor{darkgreen}{RGB}{0,100,0}\n\n% Inline comments useful for tables and figures.\n\\newcommandx{\\icmtl}[2][1=]{\\todo[inline]{DC: #2}\\xspace}\n\\newcommandx{\\icmtm}[2][1=]{\\todo[inline]{CM: #2}\\xspace}\n\n% Comments 
for other places.\n\\newcommandx{\\cmtl}[2][1=]{\\todo[linecolor=blue,backgroundcolor=blue!10,bordercolor=blue,#1]{DC: #2}\\xspace}\n\\newcommandx{\\cmtm}[2][1=]{\\todo[linecolor=red,backgroundcolor=red!10,bordercolor=red,#1]{CM: #2}\\xspace}\n\n\\newcommand\\cmb[1]{\\marginpar{\\tiny\\raggedright\\textcolor{blue}{\\textsf{ DC\\@: #1}}}}\n\\newcommand\\cmm[1]{\\marginpar{\\tiny\\raggedright\\textcolor{red}{\\textsf{\\bfseries CM\\@: #1}}}}\n\n\\usepackage{enumerate}\n\n\\setcounter{secnumdepth}{3}\n\n\\usepackage{footnote}\n\\makesavenoteenv{tabular}\n\\makesavenoteenv{table}\n\n\\usepackage{xpinyin}\n\n% -- Document --\n\\begin{document}\n\n% Title\n\\title{Neural Reading Comprehension and Beyond}\n\\author{Danqi Chen}\n\\principaladviser{Christopher D. Manning}\n\\firstreader{Dan Jurafsky}\n\\secondreader{Percy Liang}\n\\thirdreader{Luke Zettlemoyer}\n\n% Preface\n\\beforepreface\n\\input preface.tex\n\\input ack.tex\n\\afterpreface\n\\hypersetup{linkcolor=magenta}\n\n\n% -- Sections --\n% Introduction\n\\chapter{Introduction}\n\\label{chapter:intro}\n\\input intro.tex\n\n\\part{Neural Reading Comprehension: Foundations}\n\n\\chapter{An Overview of Reading Comprehension}\n\\label{chapter:rc-overview}\n\\input chapters/rc_overview/intro.tex\n\\input chapters/rc_overview/history.tex\n\\input chapters/rc_overview/task.tex\n\\input chapters/rc_overview/discussions.tex\n\n\\chapter{Neural Reading Comprehension Models}\n\\label{chapter:rc-models}\n\\input chapters/rc_models/intro.tex\n\\input chapters/rc_models/feature_classifier.tex\n\\input chapters/rc_models/sar.tex\n\\input chapters/rc_models/experiments.tex\n\\input chapters/rc_models/advances.tex\n\n\\chapter{The Future of Reading Comprehension}\n\\label{chapter:rc-future}\n\\input chapters/rc_future/overview.tex\n\\input chapters/rc_future/datasets.tex\n\\input chapters/rc_future/models.tex\n\\input chapters/rc_future/questions.tex\n\n\\part{Neural Reading Comprehension: Applications}\n\n\\chapter{Open Domain 
Question Answering}\n\\label{chapter:openqa}\n\\input chapters/openqa/intro.tex\n\\input chapters/openqa/related_work.tex\n\\input chapters/openqa/system.tex\n\\input chapters/openqa/evaluation.tex\n\\input chapters/openqa/future.tex\n% \\input chapters/openqa/future.tex\n\n\\chapter{Conversational Question Answering}\n\\label{chapter:coqa}\n\\input chapters/coqa/intro.tex\n\\input chapters/coqa/related_work.tex\n\\input chapters/coqa/dataset.tex\n\\input chapters/coqa/models.tex\n\\input chapters/coqa/experiments.tex\n\\input chapters/coqa/discussions.tex\n\n% Conclusion\n\\chapter{Conclusions}\n\\label{chapter:conclusions}\n\\input conclude.tex\n\n% Bibliography\n\\bibliographystyle{acl_natbib_nourl}\n\\bibliography{ref}\n\n\\end{document}\n"
  }
]