Week Ending 9.20.2020

 

RESEARCH WATCH: 9.20.2020

 

This week was very active for "Computer Science - Artificial Intelligence", with 184 new papers.

  • The paper discussed most in the news over the past week was by a team at DeepMind: "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess" by Nenad Tomašev et al (Sep 2020), which was referenced 8 times, including in the ArsTechnica article "AI ruined chess. Now it’s making the game beautiful again". Paper co-author Vladimir Kramnik was quoted saying "For quite a number of games on the highest level, half of the game—sometimes a full game—is played out of memory. You don't even play your own preparation; you play your computer's preparation." The paper got social media traction with 284 shares. The researchers use AlphaZero to creatively explore and design new chess variants. On Twitter, @debarghya_das observed "1/5 This chess paper from DeepMind and has absolutely consumed my mind in the last few days. They answered a question many chess players have dreamed of - how fair is chess? If you change the rules, does that change?".

  • Leading researcher Kyunghyun Cho (New York University) came out with "A Systematic Characterization of Sampling Algorithms for Open-ended Language Generation".

  • The paper shared most on social media this week was by a team at Google: "The Hardware Lottery" by Sara Hooker (Sep 2020), with 776 shares. @charles_irl (Charles 🎉 Frye) tweeted "Imagine if consumer graphics had required efficient trees, instead of matmuls. I bet we'd all be talking about "deep tree learning" and GPTree for NLP with 175 bn branches! Really nice paper on the "Hardware Lottery" by with historical egs from Babbage to Hinton".

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 252 new papers.

This week was active for "Computer Science - Computers and Society", with 30 new papers.

This week was very active for "Computer Science - Human-Computer Interaction", with 38 new papers.

This week was very active for "Computer Science - Learning", with 346 new papers.

  • The paper discussed most in the news over the past week was "TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices" by Alexander Wong et al (Aug 2020), which was referenced 20 times, including in the Newswire.com article "New ABR Technology Lowers Power Consumption by 94% for Always-On Devices". The paper author, Alexander Wong (University of Waterloo), was quoted saying "The key takeaways from this research is that not only can self-attention be leveraged to significantly improve the accuracy of deep neural networks, it can also have great ramifications for greatly improving efficiency and robustness of deep neural networks". The paper got social media traction with 12 shares. The researchers introduce the concept of attention condensers for building low-footprint, highly efficient deep neural networks for on-device speech recognition on the edge. On Twitter, @sheldonfff observed "Important theoretical development from our team allowing AI to employ human-like shortcuts in the interest of efficiency. "I see only one move ahead, but it is always the correct one." – Jose Capablanca, World Chess Champion 1921-27 #darwinai".

  • Leading researcher Kyunghyun Cho (New York University) came out with "Evaluating representations by the complexity of learning low-loss predictors". @yapp1e tweeted "Evaluating representations by the complexity of learning low-loss predictors. We consider the problem of evaluating representations of data for use in solving a downstream task. We propose to measure".

  • The paper shared most on social media this week was by a team at Google: "The Hardware Lottery" by Sara Hooker (Sep 2020), with 776 shares.

Over the past week, 13 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 23 new papers were published in "Computer Science - Neural and Evolutionary Computing".

Over the past week, 42 new papers were published in "Computer Science - Robotics".

  • The paper discussed most in the news over the past week was "Super-Human Performance in Gran Turismo Sport Using Deep Reinforcement Learning" by Florian Fuchs et al (Aug 2020), which was referenced 6 times, including in the Tech Xplore article "A deep learning model achieves super-human performance at Gran Turismo Sport". The paper author, Yunlong Song, was quoted saying "Autonomous driving at high speed is a challenging task that requires generating fast and precise actions even when the vehicle is approaching its physical limits". The paper got social media traction with 159 shares. The researchers consider the task of autonomous car racing in the top-selling car racing game Gran Turismo Sport. A Twitter user, @Underfox3, observed "Researchers have presented the first autonomous racing policy that achieves super-human performance in time trial settings in the Gran Turismo Sport. #DeepLearning".


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.