  Expression of Interest

    New Bulgarian University

    My research is focused on using deep learning to model various cognitive functions, such as short-term memory, visual object recognition and symbolic reasoning. My most recent contributions to the field include a novel way to represent symbolic structures in connectionist systems (Vankov & Bowers, in press), a systematic exploration of the degree to which convolutional neural networks support translation and size invariance (Blything, Vankov, Ludwig, & Bowers, 2019), and a solution to the binding problem in recurrent neural networks (Slavov & Vankov, in preparation).
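
    To make the translation-tolerance question concrete, here is a minimal sketch of the kind of test involved, assuming a TensorFlow/Keras setup: the same object patch is presented at different locations of an otherwise blank image and the network's outputs are compared. The toy CNN, patch and shift values are placeholders for illustration, not the setup of the published study.

        import numpy as np
        import tensorflow as tf

        # Hypothetical untrained CNN; in practice a trained model would be loaded.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])

        def place(patch, top, left, size=64):
            """Embed an object patch on a blank canvas at position (top, left)."""
            canvas = np.zeros((size, size, 1), dtype=np.float32)
            canvas[top:top + patch.shape[0], left:left + patch.shape[1], 0] = patch
            return canvas

        patch = np.random.rand(8, 8).astype(np.float32)  # stand-in object
        base = model.predict(place(patch, 0, 0)[None], verbose=0)[0]
        for shift in (8, 16, 32, 48):
            out = model.predict(place(patch, shift, shift)[None], verbose=0)[0]
            # Cosine similarity of the two output vectors; 1.0 = full tolerance.
            sim = np.dot(base, out) / (np.linalg.norm(base) * np.linalg.norm(out))
            print(f"shift {shift:2d} px: output similarity = {sim:.3f}")

    A fully translation-tolerant network would produce near-identical outputs at every position, so the similarity values quantify the degree of tolerance.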


    My contribution to a project within this call may consist of research related to symbolic computation in deep neural networks (e.g. relational categorization, analogical mapping), as well as in the field of visual object recognition (for example, using the partial occlusion/bubbles technique to outline the critical regions of a visual category; see the sketch below). I supervise a number of graduate students in the Cognitive Science program at New Bulgarian University who have experience in modelling and behavioural experimentation. I also have extensive experience in analyzing the internal states (i.e. the hidden layer activations) of neural networks.
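
    As an illustration of both techniques, the following sketch maps the critical regions of an image by sliding a grey occluder across it and recording the drop in the target class score, and then reads out hidden-layer activations through an intermediate Keras model. The network, stand-in image and occluder settings are assumptions made for the example, not a committed analysis pipeline.

        import numpy as np
        import tensorflow as tf

        # Hypothetical model with random weights; a trained network would be used in practice.
        model = tf.keras.applications.MobileNetV2(weights=None)

        image = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in image
        probs = model.predict(image[None], verbose=0)[0]
        target = int(np.argmax(probs))  # class whose evidence is tracked
        base_score = probs[target]

        # Occlusion map: the score drop at each occluder position marks region importance.
        patch, stride = 32, 32
        heatmap = np.zeros((224 // stride, 224 // stride))
        for i in range(0, 224, stride):
            for j in range(0, 224, stride):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch, :] = 0.5  # grey occluder
                score = model.predict(occluded[None], verbose=0)[0][target]
                heatmap[i // stride, j // stride] = base_score - score

        # Hidden-layer activations can be read out through an intermediate model.
        probe = tf.keras.Model(inputs=model.input, outputs=model.layers[-2].output)
        hidden = probe.predict(image[None], verbose=0)  # penultimate-layer activations
        print(heatmap.round(3))
        print(hidden.shape)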


    Selected publications:


    • Vankov, I., & Bowers, J. (in press). Training neural networks to encode symbols enables combinatorial generalization. Philosophical Transactions of the Royal Society B. arXiv preprint arXiv:1703.04474

    • Blything, R., Vankov, I., Ludwig, C., & Bowers, J. (2019). Extreme translation tolerance in humans and machines. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience, Berlin, Germany. doi: 10.32470/CCN.2019.1091-0

    • Vankov, I., & Bowers, J. (2016). Do arbitrary input–output mappings in parallel distributed processing networks require localist coding? Language, Cognition and Neuroscience, 32(3), 392–399. doi: 10.1080/23273798.2016.1256490

    • Bowers, J., Vankov, I., Damian, M., & Davis, C. (2014). Neural networks learn highly selective representations in order to overcome the superposition catastrophe. Psychological Review, 121(2), 248–261. doi: 10.1037/a0035943



    I have extensive research and practical experience in neural network modelling with tools such as Python, TensorFlow, Keras, PyMC3, TensorFlow Probability, pandas, statsmodels, NLTK, LENS and MATLAB.


    My lab is equipped with relevant computing hardware (e.g. GPUs) and software.
