{"id":91,"date":"2024-12-01T13:17:28","date_gmt":"2024-12-01T05:17:28","guid":{"rendered":"https:\/\/www.averylxu.com\/?page_id=91"},"modified":"2025-02-11T12:30:24","modified_gmt":"2025-02-11T04:30:24","slug":"reading-list","status":"publish","type":"page","link":"https:\/\/www.averylxu.com\/?page_id=91","title":{"rendered":"Reading List"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<ul class=\"wp-block-list has-contrast-color has-base-background-color has-text-color has-background has-link-color has-small-font-size wp-elements-54e8feb65f20521de9050956d00b8d60\">\n<li><strong>Agent AI: Surveying the Horizons of Multimodal Interaction. <\/strong>Zane Durante, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2401.03568\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2401.03568\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>The Annotated Transformer.<\/strong>&nbsp;Sasha Rush, et al.&nbsp;<a href=\"https:\/\/nlp.seas.harvard.edu\/annotated-transformer\/\">[Blog]<\/a>&nbsp;<a href=\"https:\/\/github.com\/harvardnlp\/annotated-transformer\/\">[Code]<\/a><\/li>\n\n\n\n<li><strong>The First Law of Complexodynamics.<\/strong>&nbsp;Scott Aaronson.&nbsp;<a href=\"https:\/\/scottaaronson.blog\/?p=762\">[Blog]<\/a><\/li>\n\n\n\n<li><strong>The Unreasonable Effectiveness of Recurrent Neural Networks.<\/strong>&nbsp;Andrej Karpathy.&nbsp;<a href=\"https:\/\/karpathy.github.io\/2015\/05\/21\/rnn-effectiveness\/\">[Blog]<\/a>&nbsp;<a href=\"https:\/\/github.com\/karpathy\/char-rnn\">[Code]<\/a><\/li>\n\n\n\n<li><strong>Understanding LSTM Networks.<\/strong>&nbsp;Christopher Olah.&nbsp;<a href=\"https:\/\/colah.github.io\/posts\/2015-08-Understanding-LSTMs\/\">[Blog]<\/a><\/li>\n\n\n\n<li><strong>Recurrent Neural Network Regularization.<\/strong>&nbsp;Wojciech Zaremba, et 
al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1409.2329\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1409.2329\">[pdf]<\/a>&nbsp;<a href=\"https:\/\/github.com\/wojzaremba\/lstm\">[Code]<\/a><\/li>\n\n\n\n<li><strong>Keeping Neural Networks Simple by Minimizing the Description Length of the Weights.<\/strong>&nbsp;Geoffrey E. Hinton and Drew van Camp.&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/168304.168306\">[Paper]<\/a>&nbsp;<a href=\"https:\/\/www.cs.toronto.edu\/~hinton\/absps\/colt93.pdf\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Pointer Networks.<\/strong>&nbsp;Oriol Vinyals, et al.&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/5866-pointer-networks\">[Paper]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1506.03134\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>ImageNet Classification with Deep Convolutional Neural Networks.<\/strong>&nbsp;Alex Krizhevsky, et al.&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/4824-imagenet-classification-with-deep-convolutional-neural-networks\">[Paper]<\/a>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Order Matters: Sequence to sequence for sets.<\/strong>&nbsp;Oriol Vinyals, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1511.06391\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1511.06391\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>GPipe: Easy Scaling with Micro-Batch Pipeline Parallelism.<\/strong>&nbsp;Yanping Huang, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1811.06965\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1811.06965\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Deep Residual Learning for Image Recognition.<\/strong>&nbsp;Kaiming He, et al. 
<a href=\"https:\/\/arxiv.org\/abs\/1512.03385\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1512.03385\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Multi-Scale Context Aggregation by Dilated Convolutions.<\/strong>&nbsp;Fisher Yu and Vladlen Koltun.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1511.07122\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1511.07122\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Neural Message Passing for Quantum Chemistry.<\/strong>&nbsp;Justin Gilmer, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1704.01212\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1704.01212\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Attention Is All You Need.<\/strong>&nbsp;Ashish Vaswani, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1706.03762\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Neural Machine Translation by Jointly Learning to Align and Translate.<\/strong>&nbsp;Dzmitry Bahdanau, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1409.0473\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1409.0473\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Identity Mappings in Deep Residual Networks.<\/strong>&nbsp;Kaiming He, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1603.05027\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1603.05027\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>A simple neural network module for relational reasoning.<\/strong>&nbsp;Adam Santoro, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1706.01427\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1706.01427\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Variational Lossy Autoencoder.<\/strong>&nbsp;Xi Chen, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1611.02731\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1611.02731\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Relational recurrent neural networks.<\/strong>&nbsp;Adam Santoro, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1806.01822\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1806.01822\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton.<\/strong>&nbsp;Scott Aaronson, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1405.6903\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1405.6903\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Neural Turing Machines.<\/strong>&nbsp;Alex Graves, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1410.5401\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1410.5401\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Deep Speech 2: End-to-End Speech Recognition in English and Mandarin.<\/strong>&nbsp;Dario Amodei, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1512.02595\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1512.02595\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Scaling Laws for Neural Language Models.<\/strong>&nbsp;Jared Kaplan, et al.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2001.08361\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2001.08361\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>A Tutorial Introduction to the Minimum Description Length Principle.<\/strong>&nbsp;Peter Grunwald.&nbsp;
<a href=\"https:\/\/arxiv.org\/abs\/math\/0406077\">[ArXiv]<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/math\/0406077\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>Machine Super Intelligence.<\/strong>&nbsp;Shane Legg.&nbsp;<a href=\"http:\/\/www.vetta.org\/publications\/\">[Blog]<\/a>&nbsp;<a href=\"https:\/\/pdfs.semanticscholar.org\/e758\/b579456545f8691bbadaf26bcd3b536c7172.pdf\">[Presentation]<\/a>&nbsp;<a href=\"http:\/\/www.vetta.org\/documents\/Machine_Super_Intelligence.pdf\">[pdf]<\/a><\/li>\n\n\n\n<li><strong>CS231n: Convolutional Neural Networks for Visual Recognition.<\/strong>&nbsp;<a href=\"https:\/\/cs231n.stanford.edu\/\">[Course]<\/a>&nbsp;<a href=\"https:\/\/cs231n.github.io\/\">[GitHub]<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"parent":2,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"superbaddons-page-template","meta":{"footnotes":""},"class_list":["post-91","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/www.averylxu.com\/index.php?rest_route=\/wp\/v2\/pages\/91","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.averylxu.com\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.averylxu.com\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.averylxu.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.averylxu.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=91"}],"version-history":[{"count":18,"href":"https:\/\/www.averylxu.com\/index.php?rest_route=\/wp\/v2\/pages\/91\/revisions"}],"predecessor-version":[{"id":147,"href":"https:\/\/www.averylxu.com\/index.php?rest_route=\/wp\/v2\/pages\/91\/revisions\/147"}],"up":[{"embeddable":true,"href":"https:\/\/www.averylxu.com\/index.php?rest_route=\/wp\/v2\/pages\/2"}],"wp:attachment":[{"href":"https:\/\/www.averylxu.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=91"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}