The euphoria is on the rise. There is no looking back, no thought of AI winters, and no more being alone in a world of personalized, AI-based bots. AlphaGo, DeepMind's Go-playing bot, defeated one of the top Go players in the world. The Chinese board game Go has more board states than there are atoms in the observable universe. The achievement goes down as one of the top 10 science events of 2016, alongside climate change, CRISPR and Donald Trump. AI and deep learning are changing every field in a significant way, e.g. assisting humans in finding oil from seismic traces, helping doctors assess radiological images for diagnosis, enabling self-driving cars, etc. Deep learning was one of the hottest topics at RSNA '16, the world's largest radiology conference.
Deep Reinforcement Learning, the technology behind AlphaGo, and Generative Adversarial Networks (GANs) were the flag bearers of deep learning research in 2016. DeepMind, in its pursuit of Artificial General Intelligence, is leading from the front in advancing AI: backing Distill – a new, interactive way to publish research for better understanding of methodologies – and releasing Differentiable Neural Computers – a form of memory-augmented neural network that can learn to use its memory to answer complex questions – WaveNet – a generative model for raw audio – and many more awesome pieces of research.
In late 2015, Elon Musk, Sam Altman, Ilya Sutskever and Greg Brockman came together to create OpenAI. OpenAI's mission statement is to “Build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible.” With the release of OpenAI Gym and Universe, OpenAI has made reinforcement learning accessible to beginners: they provide common Flash games, internet games, Atari games, etc. as environments, so that researchers can spend more time creating algorithms and less time building environments. Standardizing these environments also gives us a common platform for comparing and testing algorithms against others from across the globe. OpenAI has assembled a team of some of the best people from across the globe, including PhDs, recent graduates, and a few from industry as well.
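What Gym standardizes is a tiny, uniform interface: every environment exposes reset() and step(action), with step returning an observation, a reward, a done flag, and an info dict. A minimal sketch of that contract, using a hypothetical toy environment (not part of the real gym package) so it runs standalone:

```python
import random

class CoinFlipEnv:
    """Toy stand-in for a Gym environment: guess a coin flip each step.
    The class itself is made up for illustration; only the
    reset()/step() contract mirrors Gym's (circa 2016) API."""

    def __init__(self, episode_length=10):
        self.episode_length = episode_length
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # initial observation

    def step(self, action):
        self.t += 1
        coin = random.randint(0, 1)
        reward = 1.0 if action == coin else 0.0
        done = self.t >= self.episode_length
        return coin, reward, done, {}  # observation, reward, done, info

# The standard agent-environment loop: any algorithm that speaks this
# interface can be dropped into any environment that implements it.
env = CoinFlipEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.randint(0, 1)  # random policy, purely for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```

Swapping the toy class for `gym.make("CartPole-v0")` would leave the loop unchanged – that interchangeability is the whole point of the standardization.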
DeepMind and OpenAI are leading the pack of researchers, and with the support of large firms such as Google, Tesla and Amazon, they have access to large hardware setups as well as huge amounts of data. Evolution Strategies are back on the rise as an alternative to policy gradients for training neural networks (provided you have access to a 700-core machine). Meanwhile, DeepMind is exploring how multiple agents cooperate in a reinforcement learning environment, and has observed surprisingly interesting results. Both companies are also focusing on how we can use AI safely: OpenAI, along with Stanford and Berkeley, published work discussing some concrete problems in AI safety, while in London, DeepMind has created an AI safety group comprising eminent researchers from diverse backgrounds.
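The core idea of Evolution Strategies is refreshingly simple: perturb the parameters with Gaussian noise, evaluate the resulting returns, and step along the reward-weighted average of the perturbations – no backpropagation through the policy required. A single-parameter sketch with a toy fitness function standing in for an episode return (the hyperparameters and the mirrored +n/−n sampling here are illustrative choices, not taken from any particular paper):

```python
import random

def fitness(theta):
    # Toy objective standing in for an RL return; peaks at theta = 3.
    return -(theta - 3.0) ** 2

def evolution_strategies(theta=0.0, sigma=0.1, alpha=0.05,
                         population=50, iterations=300, seed=0):
    """Minimal ES sketch: estimate the gradient of expected fitness
    from Gaussian perturbations (mirrored pairs +n/-n cut variance)
    and take a plain gradient-ascent step on the single parameter."""
    rng = random.Random(seed)
    for _ in range(iterations):
        grad = 0.0
        for _ in range(population // 2):
            n = rng.gauss(0.0, 1.0)
            grad += (fitness(theta + sigma * n)
                     - fitness(theta - sigma * n)) * n
        grad /= population * sigma
        theta += alpha * grad
    return theta

theta = evolution_strategies()
print(round(theta, 3))  # converges toward 3.0
```

Each perturbation's fitness can be evaluated independently, which is why the method parallelizes so well across hundreds of cores: workers only need to exchange scalar rewards and a shared random seed.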
In late September '16, the big four tech companies and IBM came together to create the Partnership on AI, a non-profit organization whose aim is to advance public understanding of AI and formulate best practices around its challenges. The outgoing Obama administration also released a report on AI outlining what the government's role in the progress of the field should be. Technical research in deep learning is moving forward at a tremendous pace, and AI in industry is growing as well, but more slowly. The majority of AI sits in the backend, where it does not come into direct contact with consumers; when it does, it is in the form of assistants. Bots that screen your resume for hiring, or AI that gives an opinion on a diagnosis – the number of such applications is still low. One reason is, of course, maturing the technology to the point where it can be put to industrial use; the other is legal hurdles. The government does not yet have a strong policy on what the legal barriers to automation are. Who is responsible if a self-driving car is involved in an accident? If my resume gets rejected by an AI screener, who will explain the decision to me? These questions require a legal framework to be in place before they can be addressed successfully. Discussions have started in the EU on creating a legal framework for AI, and MIT and DeepMind are doing some pretty interesting things to involve the common person in this process.
People will be replaced by bots, but these will be people doing mundane jobs like data entry, which can easily be replaced or done faster with assisted AI. The same happened during the Industrial Revolution: machines came in, people were displaced, but they moved on to more skilled work. The same goes for the AI revolution: technology won't kick people out, but it will force them to move on to more skilled work and upgrade their skills. The government will have a significant role to play in ensuring that this does not lead to greater inequality, that education remains affordable to all, that monopolies don't form, etc.
NIPS 2016 (Neural Information Processing Systems), held in December '16 in Barcelona, Spain, is one of the most famous and best-attended neural network conferences. Generative Adversarial Networks (GANs), introduced in 2014, were the talk of the conference. 2016 saw tremendous growth in GAN research, and results have improved by significant margins. Learning What and Where to Draw, and the “How to train your GAN” talk by Soumith Chintala from FAIR, were among the hot topics in this respect. Value Iteration Networks, which opened a new paradigm in RL, was one of the best papers of the conference. Learning to Learn by Gradient Descent by Gradient Descent was another highlight, both for its name and for the idea of meta-learning it introduces. NIPS '16 hosted a secretive party for the launch of RocketAI, one of the most talked-about startups at the conference. The best part is that RocketAI was born at the conference, as a prank by a few researchers exploiting the hype that exists in the industry and among VC circles. This Medium post describes the problem with AI hype in industry. The fraction of DL-related papers submitted to arXiv has increased exponentially in the last couple of years, e.g. in Computer Vision (within CS) and Machine Learning (within Stats). Karpathy, in his latest blog post, gives a glimpse into these trends.
And as the number of papers increases, the majority of them are applied deep learning: take an architecture, train it on some exotic dataset, and get it published. A good number of papers do try to create a new idea and claim better results than the state of the art. The problem with most of them is that their experiments are run on standard datasets like MNIST, which has been around for a very long time, or other standard natural-image datasets like ImageNet or MS-COCO, on which we have already surpassed human accuracy. Reproducibility and statistical significance are the key criteria by which we should judge papers: rather than a single accuracy number, a paper should report error bars as well. The datasets used must be public, and when comparisons are made, care must be taken that all the papers being compared use exactly the same dataset. For example, in one paper on diagnosing Alzheimer's using deep learning, the author compared his results with another paper even though the two used different datasets. That paper was accepted at the 2016 IEEE International Conference on Image Processing and reported accuracies greater than 97%. Such numbers give false hopes to researchers and industrialists. The deep learning hype is real, but at the same time it has had multiple positive impacts and has made the community livelier than ever before.
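Reporting error bars instead of a single accuracy number costs almost nothing. A minimal sketch of the practice, with made-up accuracies from several training runs of two hypothetical models:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical test accuracies from 5 independent training runs of two
# models; the numbers are invented purely for illustration.
model_a = [0.912, 0.905, 0.921, 0.898, 0.917]
model_b = [0.915, 0.910, 0.923, 0.905, 0.919]

def report(name, accs):
    """Print mean accuracy with its standard error across runs."""
    m = mean(accs)
    se = stdev(accs) / sqrt(len(accs))
    print(f"{name}: {m:.3f} +/- {se:.3f} (mean +/- std. error, n={len(accs)})")
    return m, se

ma, sa = report("model A", model_a)
mb, sb = report("model B", model_b)

# Crude overlap check: if the difference between the means is within
# ~2 combined standard errors, a single "better" accuracy number is
# not convincing evidence of an actual improvement.
diff = abs(ma - mb)
combined_se = sqrt(sa ** 2 + sb ** 2)
print("difference clearly significant?", diff > 2 * combined_se)
```

With these invented numbers the two models are statistically indistinguishable, even though a leaderboard-style comparison of best single runs would declare a winner – exactly the trap the comparison-across-different-datasets papers fall into.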