I'm a machine learning scientist at Apple, working on health-related research. I got my Ph.D. in Cognitive Psychology and Computer Science from Stanford in 2020. My primary advisor was Prof. James McClelland. I have also collaborated with the Stanford NLP group and Prof. Bruce McCandliss in the School of Education.
During my Ph.D., I worked on several projects in cognitive science, natural language processing, computer vision, and human-computer interaction. In my thesis work, I used deep learning methods to study multi-modal integration in mathematical cognition. I have published several papers in top-tier conferences, including NeurIPS, ACL, CHI, and CogSci (see my Google Scholar page for more details).
I have interned at Google AI (Mountain View) and Microsoft Research (Cambridge, UK). During my internships, I worked on Machine Intelligence and Human-Computer Interaction.


Publications and Presentations

Arianna Yuan, Yang Li (2020). Modeling Human Visual Search Performance on Realistic Webpages Using Analytical and Deep Learning Methods. In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI 2020). [PDF] [Video]
Xiaoya Li, Yuxian Meng, Arianna Yuan, Fei Wu, Jiwei Li (2020). LAVA NAT: A Non-Autoregressive Translation Model with Look-Around Decoding and Vocabulary Attention. arXiv preprint: arXiv:2002.03084. [PDF]
Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, Jiwei Li (2020). Coreference Resolution as Query-based Span Prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020). [PDF]
Yuxian Meng, Xiangyuan Ren, Zijun Sun, Xiaoya Li, Arianna Yuan, Fei Wu, Jiwei Li (2019). Large-scale Pretraining for Neural Machine Translation with Tens of Billions of Sentence Pairs. arXiv preprint: arXiv:1909.11861. [PDF]
Arianna Yuan, Jay McClelland (2019). Modeling Number Sense Acquisition in a Number Board Game by Coordinating Verbal, Visual, and Grounded Action Components. In Proceedings of the 41st Annual Meeting of the Cognitive Science Society. [PDF]
Sizhu Cheng*, Arianna Yuan* (2019). Understanding the Learning Effect of Approximate Arithmetic Training: What was Actually Learned? In Proceedings of the 17th Annual Meeting of the International Conference on Cognitive Modeling. [PDF]
Yuxian Meng, Xiaoya Li, Xiaofei Sun, Qinghong Han, Arianna Yuan, Jiwei Li (2019). Is Word Segmentation Necessary for Deep Learning of Chinese Representations? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). [PDF]
Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, Jiwei Li (2019). Entity-Relation Extraction as Multi-turn Question Answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). [PDF]
Arianna Yuan, Will Monroe, Yu Bai and Nate Kushman (2018). Understanding the Rational Speech Act Model. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society. [PDF]
Shengjia Zhao, Hongyu Ren, Arianna Yuan, Jiaming Song, Noah Goodman, Stefano Ermon (2018). Bias and Generalization in Deep Generative Models: An Empirical Study. In Proceedings of the Thirty-Second Annual Conference on Neural Information Processing Systems (NIPS 2018). [PDF]
Arianna Yuan (2017). “So what should I do next?” – Learning to Reason through Self-Talk (in prep). [PDF]
Arianna Yuan (2017). Domain-General Learning of Neural Network Models to Solve Analogy Tasks – A Large-Scale Simulation. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society. [PDF]
Arianna Yuan and Michael Henry Tessler (2017). Generating Random Sequences For You: Modeling Subjective Randomness in Competitive Games. In Proceedings of the 15th Annual Meeting of the International Conference on Cognitive Modeling. [PDF]
Arianna Yuan, Te-Lin Wu, James L. McClelland (2016). Emergence of Euclidean geometrical intuitions in hierarchical generative models. Presented at the 38th Annual Meeting of the Cognitive Science Society, Philadelphia, PA. [PDF]
Arianna Yuan (2014). A Computational Investigation of the Optimal Task Difficulty in Perceptual Learning. Presented at the 44th Annual Meeting of the Society for Neuroscience, Washington, DC.
Arianna Yuan (2012). Affective Priming Effects of Mean Facial Expressions. Presented at the 42nd Annual Meeting of the Society for Neuroscience, New Orleans, LA.
Arianna Yuan and Jay McClelland. The Representation of Negative Numbers (in prep).
Y. Sun and Arianna Yuan (2011). Applications of Emotion Models Based on HMM in Mental Health Forecast. Journal of Tianjin University (Social Sciences), 13(6): 531-536.

Research Projects

Modeling Human Visual Search Performance on Realistic Webpages Using Analytical and Deep Learning Methods (CHI 2020)

Modeling visual search not only offers an opportunity to predict the usability of an interface before actually testing it on real users, but also advances scientific understanding about human behavior. In this work, we first conduct a set of analyses on a large-scale dataset of visual search tasks on realistic webpages. We then present a deep neural network that learns to predict the scannability of webpage content, i.e., how easy it is for a user to find a specific target. Our model leverages both heuristic-based features such as target size and unstructured features such as raw image pixels. This approach allows us to model complex interactions that might be involved in a realistic visual search task, which cannot be easily achieved by traditional analytical models. We analyze the model behavior to offer insights into how the salience map learned by the model aligns with human intuition and how the learned semantic representation of each target type relates to its visual search performance. Read more...
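As a rough illustration of combining heuristic-based and pixel-based features in one scorer (the feature functions, weights, and linear model below are toy stand-ins, not the CHI 2020 model):

```python
import numpy as np

rng = np.random.default_rng(0)

def heuristic_features(target):
    """Hand-crafted cues: target area and distance from the page origin."""
    w, h, x, y = target
    return np.array([w * h, np.hypot(x, y)])

def pixel_features(patch):
    """Crude stand-in for a learned CNN embedding: mean intensity per channel."""
    return patch.mean(axis=(0, 1))

def scannability_score(target, patch, w_heur, w_pix, bias):
    """Linear combination of both feature groups, squashed to (0, 1)."""
    z = heuristic_features(target) @ w_heur + pixel_features(patch) @ w_pix + bias
    return 1.0 / (1.0 + np.exp(-z))

# Toy example: a 40x20 px target at position (10, 5) over a random RGB patch.
patch = rng.random((20, 40, 3))
score = scannability_score((40, 20, 10, 5), patch,
                           w_heur=np.array([1e-3, -1e-2]),
                           w_pix=np.array([0.1, 0.1, 0.1]),
                           bias=-0.5)
print(f"scannability: {score:.3f}")
```

In the actual model, the pixel pathway is a learned convolutional encoder rather than a fixed summary statistic.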

CorefQA: Coreference Resolution as Query-based Span Prediction (ACL 2020)

In this paper, we present an accurate and extensible approach for the coreference resolution task. We formulate the problem as a span prediction task, like in machine reading comprehension (MRC). A query is generated for each candidate mention using its surrounding context, and a span prediction module is employed to extract the text spans of the coreferences within the document using the generated query. This formulation comes with the following key advantages: (1) The span prediction strategy provides the flexibility of retrieving mentions left out at the mention proposal stage; (2) In the MRC framework, encoding the mention and its context explicitly in a query makes it possible to have a deep and thorough examination of cues embedded in the context of coreferent mentions; and (3) A plethora of existing MRC datasets can be used for data augmentation to improve the model's generalization capability. Experiments demonstrate significant performance boost over previous models, with 87.5 (+2.5) F1 score on the GAP benchmark and 83.1 (+3.5) F1 score on the CoNLL2012 benchmark. Read more...
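The query-generation step can be illustrated with a toy snippet; the `<m>` mention markers are an assumed convention for the sketch, not necessarily the exact CorefQA template:

```python
def mention_to_query(tokens, start, end):
    """Wrap the candidate mention in marker tokens so a span predictor
    can locate it in its surrounding context."""
    return tokens[:start] + ["<m>"] + tokens[start:end] + ["</m>"] + tokens[end:]

tokens = "Alice said she would come".split()
query = mention_to_query(tokens, 0, 1)  # candidate mention: "Alice"
print(" ".join(query))  # → <m> Alice </m> said she would come
```

The span prediction module then reads this query against the document and returns the spans (here, "she") coreferent with the marked mention.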

Entity-Relation Extraction as Multi-turn Question Answering (ACL 2019)

In this paper, we propose a new paradigm for the task of entity-relation extraction. We cast the task as a multi-turn question answering problem, i.e., the extraction of entities and relations is transformed to the task of identifying answer spans from the context. This multi-turn QA formalization comes with several key advantages. Firstly, the question query encodes important information for the entity/relation class we want to identify; secondly, QA provides a natural way of jointly modeling entity and relation; and thirdly, it allows us to exploit well-developed machine reading comprehension (MRC) models. Experiments on the ACE and the CoNLL04 corpora demonstrate that the proposed paradigm significantly outperforms previous best models. Additionally, we construct and will release a newly developed dataset, RESUME, which requires multi-step reasoning to construct entity dependencies, as opposed to the single-step triplet extraction in previous datasets. The proposed multi-turn QA model also achieves the best performance on the RESUME dataset. Read more...
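The multi-turn formulation can be sketched with toy question templates (the wording and the entity/relation label sets below are illustrative, not the paper's exact templates):

```python
# Turn-1 templates find entities; turn-2 templates are conditioned on a
# turn-1 answer, which turns relation extraction into span-finding QA.
ENTITY_QUESTIONS = {
    "PER": "Find all person entities in the text.",
    "ORG": "Find all organization entities in the text.",
}
RELATION_QUESTIONS = {
    ("PER", "work_for", "ORG"): "Which organization does {e} work for?",
}

def second_turn_question(entity, relation):
    """Build the follow-up question for an entity found in turn 1."""
    return RELATION_QUESTIONS[relation].format(e=entity)

print(second_turn_question("Alice", ("PER", "work_for", "ORG")))
# → Which organization does Alice work for?
```

Each answer span extracted in a later turn fills one slot of the entity-relation structure.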

Bias and Generalization in Deep Generative Models: An Empirical Study (NeurIPS 2018)

In high dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this project we propose a framework to systematically investigate bias and generalization in deep generative models of images by probing the learning algorithm with carefully designed training datasets. We exactly characterize the learned distribution to study if/how the model generates novel features and novel combinations of existing features. We identify similarities to human psychology and verify that these patterns are consistent across datasets, common models and architectures. Read more...
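A toy version of the probing setup, with a mock sampler standing in for a trained model (everything here is illustrative; the study trains real generative models on carefully designed image datasets and measures, e.g., which object counts they produce):

```python
import numpy as np

rng = np.random.default_rng(1)
train_counts = [2, 10]  # object counts present in the designed training set

def mock_generator(n_samples):
    """Stand-in for a trained generative model that smooths over the
    training modes; the probing framework measures this kind of behavior."""
    centers = rng.choice(train_counts, size=n_samples)
    return np.clip(np.round(centers + rng.normal(0, 1.5, n_samples)), 0, None)

samples = mock_generator(10_000)
novel = float(np.mean(~np.isin(samples, train_counts)))
print(f"fraction of samples with a novel count: {novel:.2f}")
```

Comparing the sampled feature distribution against the training distribution is what lets one characterize generalization to novel feature values.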

Understanding the Rational Speech Act Model (CogSci 2018)

The Rational Speech Act (RSA) model proposes that probabilistic speakers and listeners recursively reason about each other’s mental states to communicate. It has been quite successful in explaining many pragmatic reasoning phenomena. In this paper, we systematically analyzed the RSA model and found that in Monte Carlo simulations pragmatic listeners and speakers always outperform their literal counterparts, and the expected accuracy increases as the number of recursions increases. Furthermore, limiting the computational resources of the speaker and listener so they sample only the top k most likely options leads to higher expected accuracy. We verified these results on a previously collected natural language dataset in color reference games. Read more...
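The RSA recursion itself is compact; here is a minimal Python sketch over a toy two-utterance reference game (the lexicon and the rationality parameter are made up for illustration):

```python
import numpy as np

# Toy reference game: utterances "glasses"/"hat" over three faces.
# Lexicon rows = utterances, cols = states; 1 = utterance true of state.
LEXICON = np.array([[1.0, 1.0, 0.0],   # "glasses": true of face1, face2
                    [0.0, 1.0, 1.0]])  # "hat":     true of face2, face3
ALPHA = 1.0  # speaker rationality (assumed value)

def normalize(M):
    return M / M.sum(axis=1, keepdims=True)

def pragmatic_listener(depth):
    """L0 is the normalized lexicon; each recursion alternates a softmax
    speaker and a Bayesian listener best-responding to the level below."""
    listener = normalize(LEXICON)                   # P(state | utterance)
    for _ in range(depth):
        speaker = normalize((listener ** ALPHA).T)  # P(utterance | state)
        listener = normalize(speaker.T)
    return listener

L1 = pragmatic_listener(1)
print(L1.round(2))
```

With one level of recursion, the listener resolves "glasses" mostly to face1 and "hat" mostly to face3, since face2 could be described either way; this strengthening with depth is the effect the simulations quantify.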

Domain-General Learning of Neural Network Models for Analogical Reasoning (CogSci 2017)

We built domain-general neural network models that learn to solve analogy tasks in different modalities (texts and images) using word representations and image representations learned from large-scale naturalistic corpora. The model reproduces key findings in the analogical reasoning literature, including the relational shift and the familiarity effect. Read more...
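The classic vector-offset analogy test underlying such tasks can be sketched in a few lines (the embeddings below are toy values, not representations learned from a corpus):

```python
import numpy as np

# Toy embedding table (made-up 2-d vectors for illustration only).
E = {"man":   np.array([1.0, 0.0]),
     "woman": np.array([1.0, 1.0]),
     "king":  np.array([3.0, 0.0]),
     "queen": np.array([3.0, 1.0])}

def solve_analogy(a, b, c):
    """a : b :: c : ?  via the vector-offset method, excluding the inputs."""
    target = E[b] - E[a] + E[c]
    candidates = [w for w in E if w not in (a, b, c)]
    return min(candidates, key=lambda w: np.linalg.norm(E[w] - target))

print(solve_analogy("man", "woman", "king"))  # → queen (with these toy vectors)
```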

Generating Texts for Natural Language Inference (2016)

We trained Long Short-Term Memory recurrent neural networks to generate sentences that were either entailed or contradicted by given sentences. Specifically, we built a multi-tasking LSTM RNN that performed the Entailment task and the Contradiction task simultaneously, and visualized these recurrent neural networks to investigate how they performed logical inference in natural language. Read more...

Neural Theorem Prover (2016)

We built a neural network model to prove theorems in logical forms. In particular, the model receives a set of axioms (premises) and a theorem to prove (goal), and it needs to select multiple axioms from the axiom list that together prove the goal theorem. We reframe this as a sequence-to-sequence learning problem and use double recurrent neural networks to encode the theorem and output a sequence of axioms. Read more...
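The sequence-to-sequence reframing amounts to flattening the axioms and the goal into one source sequence and emitting axiom identifiers as the target sequence; a sketch of that data formatting (the token conventions are assumed, not the project's exact ones):

```python
def to_seq2seq_example(axioms, goal, proof_axiom_ids):
    """Flatten a proving task into (source, target) token sequences.
    Token conventions (<ax_i>, <goal>, <eos>) are illustrative."""
    src = []
    for i, axiom in enumerate(axioms):
        src += [f"<ax{i}>"] + axiom.split()
    src += ["<goal>"] + goal.split()
    tgt = [f"<ax{i}>" for i in proof_axiom_ids] + ["<eos>"]
    return src, tgt

axioms = ["p -> q", "q -> r", "s"]
src, tgt = to_seq2seq_example(axioms, "p -> r", [0, 1])
print(tgt)  # → ['<ax0>', '<ax1>', '<eos>']
```

An encoder reads `src` and a decoder emits `tgt`, so selecting a proof becomes ordinary sequence generation.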

Emergence of Euclidean Geometrical Intuitions in Hierarchical Generative Models (2015)

We built a deep autoencoder to reconstruct geometric figures and analyzed the representations of deep-belief networks by visualizing the response profiles of hidden units. We found that some units demonstrate numerosity-sensitivity as the parietal neurons in the primate brain do. Read more...

Generating Random Sequences For You: Modeling Subjective Randomness in Competitive Games (2015)

We built two probabilistic models of Theory of Mind reasoning about subjective randomness and implemented them in WebPPL. Our work suggests that the calibrated subjective randomness in competitive games can be explained by the online evaluation of sequence randomness with Theory of Mind reasoning. Read more...
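A rough Python sketch of the idea (the actual models were written in WebPPL; the alternation-count randomness proxy and softmax judge below are simplifications I use only for illustration):

```python
import itertools
import math

def alternations(seq):
    """Number of adjacent flips, a crude subjective-randomness proxy."""
    return sum(a != b for a, b in zip(seq, seq[1:]))

def judged_random_prob(seq, beta=1.0):
    """Opponent's (softmax) belief that seq was generated at random,
    relative to the expected alternation count of a fair coin."""
    n = len(seq)
    return 1 / (1 + math.exp(-beta * (alternations(seq) - (n - 1) / 2)))

def best_sequence(n=5):
    """Theory-of-Mind best response: emit the sequence the opponent is
    most likely to judge as random."""
    return max(itertools.product([0, 1], repeat=n), key=judged_random_prob)

print(best_sequence())
```

Best-responding to such a judge yields over-alternating sequences, mirroring the calibrated subjective randomness people produce in competitive games.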