Lately, I’ve been intrigued by the variety of reference genomes used across the genomics community. Just how many different versions of the genome exist became clear as I researched which reference genomes other major projects were using for this RNA-Seq pipeline RFC. I did a moderately comprehensive analysis of these in this Jupyter notebook, but this post serves as a short commentary on what I found, why I think it matters, and what the takeaways are moving forward.
Nov 2, 2019
Deep Learning and Artificial General Intelligence
May 1, 2016
I’ve been studying and writing about deep learning (DL) for a few years now, and it still amazes me how much misinformation surrounds this relatively complex learning algorithm. This post is not about how deep learning is or is not over-hyped, as that claim is already well documented. Rather, it’s a jumping-off point for a (hopefully) fresh, concise understanding of deep learning and its implications for artificial general intelligence (AGI). I’m going to be bold and try to make some claims about the role this field of study will or will not play in the genesis of AGI. With all of the news on AI breakthroughs and non-industry commentators drawing rash conclusions about how deep learning will change the world, don’t we, as the deep learning community, owe it to the world to at least have our own camp in order?
Creating Your Own IPython-Like Server
Feb 18, 2016
Lately I’ve been using Jupyter (formerly IPython) notebooks frequently for reproducible research, and I’ve been wondering how it all works under the hood. Furthermore, I’ve needed some custom functionality that IPython doesn’t include by default. Instead of extending IPython, I decided I would take a stab at building my own simple IPython kernel that runs on a remote server where my GPU farm lives. I won’t be worrying about security or concurrency, since I will be the only person with access to the server. The exercise should give you an idea about how server-based coding environments work in Python.
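To give a flavor of the core idea, here is a minimal sketch (my own illustration under assumptions, not necessarily the approach the full post takes): an HTTP server that exec()s submitted code in a persistent namespace and returns the captured stdout, which is essentially the loop a notebook-style kernel runs. Like the post, it ignores security entirely.

```python
# Minimal sketch of a remote code-execution endpoint (illustration only).
import io
import contextlib
from http.server import BaseHTTPRequestHandler, HTTPServer

NAMESPACE = {}  # persists across requests, like a notebook session


class ExecHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        code = self.rfile.read(length).decode("utf-8")
        buffer = io.StringIO()
        try:
            # Capture print() output so it can be sent back to the client.
            with contextlib.redirect_stdout(buffer):
                exec(code, NAMESPACE)
            status, body = 200, buffer.getvalue()
        except Exception as exc:
            status, body = 400, f"{type(exc).__name__}: {exc}"
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ExecHandler).serve_forever()
```

A client could then POST snippets (for example with curl), and because NAMESPACE persists across requests, a variable defined in one request is visible in the next, much like cells in a notebook.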
Parametric Activation Pools Greatly Increase Performance and Consistency in ConvNets
Feb 6, 2016
Currently, I’m writing my master’s thesis on the subject of malleability in deep neural networks — that is, the benefits and detriments of giving a deep neural network more trainable parameters. Obviously weights and biases are trainable parameters, but the recent development of PReLUs introduced some trainable parameters into the activation functions to produce world-class results in image recognition.
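For context, the PReLU formulation from He et al. (2015) is easy to sketch; this is just the published idea, not code from the thesis. The negative-side slope `a` is a parameter learned by gradient descent rather than a fixed constant as in a leaky ReLU.

```python
import numpy as np

def prelu_forward(x, a):
    """f(x) = x for x > 0, a * x otherwise; `a` is a trainable parameter."""
    return np.where(x > 0, x, a * x)

def prelu_grad_a(x, upstream_grad):
    """Gradient of the loss w.r.t. `a`: only negative inputs contribute."""
    return np.sum(upstream_grad * np.where(x > 0, 0.0, x))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu_forward(x, a=0.25))  # [-0.5, -0.125, 0.0, 1.5]
```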
I’ll be discussing some new malleable constructs in my thesis, including Momentum ReLUs, Activation Pools (APs), and Parametric Activation Pools (PAPs). However, this blog post will focus mostly on PAPs and their performance.
Bayes Theorem for Computer Scientists
Feb 2, 2016
Few topics have given me as much trouble as Bayes’ theorem over the past couple of years.
I graduated with an undergraduate degree in EE (where calculus reigns supreme) and was thrown into probability theory late in my MS coursework. Usually, if I stare at a formula long enough, I can understand what’s going on, but even though probability theory is much lower-level math than what I did in EE, I just couldn’t seem to get my head around it. This was especially true of Bayes’ theorem. I tried many times and could never really get the idea.
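For anyone who wants the formula itself up front, the theorem the post unpacks is P(A|B) = P(B|A) P(A) / P(B). A tiny worked example (with hypothetical numbers, not taken from the post) shows why the result can feel counterintuitive:

```python
# Bayes' theorem with a classic (hypothetical) diagnostic-test example:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease = 0.01            # prior: 1% of the population has the disease
p_pos_given_disease = 0.99  # sensitivity of the test
p_pos_given_healthy = 0.05  # false-positive rate

# Total probability of a positive result (law of total probability).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of actually having the disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.167, despite the 99% sensitivity
```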