Research Projects
Neurosymbolic AI for Clean Energy Systems
National Renewable Energy Lab (NREL)
Neural networks are extremely popular within the machine learning and artificial intelligence fields. However, they have several disadvantages, including huge data requirements and a lack of interpretability. Symbolic AI (or rule-based AI) has neither of these issues but traditionally lacked the ability to learn. The goal of this project is to combine the interpretability of symbolic AI with the learning ability of neural networks in a new approach called neurosymbolic AI. We will introduce and tailor neurosymbolic algorithms to clean energy applications, including building control, wind farms, power systems, and transportation systems.
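As a toy illustration of the idea (all names and numbers here are hypothetical, not an NREL system), a single symbolic rule such as "if the temperature exceeds a threshold, switch on cooling" can be made differentiable, so the threshold is learned from data while the rule itself stays readable:

```python
import numpy as np

SHARPNESS = 2.0  # slope of the soft rule boundary

def soft_rule(temp, threshold):
    """Differentiable truth value in [0, 1] of "temp exceeds threshold"."""
    return 1.0 / (1.0 + np.exp(-SHARPNESS * (temp - threshold)))

def learn_threshold(temps, labels, lr=0.05, epochs=500):
    """Fit the rule's threshold by gradient descent on squared error."""
    theta = temps.mean()  # start at the center of the data
    for _ in range(epochs):
        pred = soft_rule(temps, theta)
        # dMSE/dtheta, using sigmoid'(x) = p * (1 - p) and dx/dtheta = -SHARPNESS
        grad = np.mean(2.0 * (pred - labels) * pred * (1.0 - pred) * -SHARPNESS)
        theta -= lr * grad
    return theta

# Toy data: cooling should switch on above roughly 22 degrees.
temps = np.array([18.0, 20.0, 21.0, 23.0, 25.0, 27.0])
labels = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
theta = learn_threshold(temps, labels)
print(round(theta, 1))  # a threshold between 21 and 23
```

Unlike a black-box network, the fitted model remains a human-readable rule whose one parameter can be inspected directly.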
Modeling Collaborative Memory
Indiana University, Bloomington
Collaborative inhibition is the phenomenon whereby a collaborating group recalls more than any single individual (Hinsz, Tindale, & Vollrath, 1997), yet less than a nominal group of the same size. For example, participants may individually study a list of items such as A, B, C, D, E, F, G, H, and I. In the collaborative group, recall is scored as the number of answers produced by the members working together. In nominal groups, recall is scored as the number of non-redundant answers produced by three individuals working alone. Much of the work in the collaborative memory field has been dedicated to understanding why this phenomenon occurs. One theory is the retrieval disruption hypothesis (Basden et al., 1997), which posits that the negative effects of collaboration occur because individual retrieval strategies are disrupted during group recall. An alternative theory is that production is blocked during group recall, causing group members to forget what they were about to recall (Diehl & Stroebe, 1987). My goal for this project is to formally model collaboration and determine whether there is support for the retrieval disruption hypothesis or the blocking/forgetting hypothesis.
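The two scoring schemes can be sketched concretely (the recall data below are made up for illustration):

```python
# Hypothetical scoring example: three participants study the items A..I.
studied = set("ABCDEFGHI")

# What each member recalls when working alone.
alone = [set("ABCD"), set("CDEF"), set("FGHI")]

# What the same three members produce while recalling together.
together = set("ABCDFG")

# Nominal recall: pooled, non-redundant answers from members working alone.
nominal = set().union(*alone) & studied
# Collaborative recall: answers the group produced together.
collaborative = together & studied

print(len(nominal))        # 9 items for the nominal group
print(len(collaborative))  # 6 items for the collaborative group
```

Here the pooled individual lists cover all nine items, while the group working together produces only six, which is the collaborative inhibition pattern the project aims to model.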
The code for the model I will be using (SAM) is available on my GitHub page.
Published Papers:
Mannering, W. M., Rajaram, S., Shiffrin, R. M., & Jones, M. N. (2022). Modeling the Effect of Learning During Retrieval on Collaborative Inhibition. Proceedings of the 44th Annual Meeting of the Cognitive Science Society.
Mannering, W. M., Rajaram, S., & Jones, M. N. (2021). Towards a Cognitive Model of Collaborative Memory. Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, 959-965.
Related Papers:
Rajaram, S. (2017). Collaborative inhibition in group recall: Cognitive principles and implications. In M. Meade, A. Barnier, P. Van Bergen, C. Harris, & J. Sutton (Eds.), Collaborative Remembering: How Remembering with Others Influences Memory. Oxford University Press.
Optimal Foraging in Semantic Memory
Indiana University, Bloomington
This project aims to find further supporting evidence for the optimal foraging search model of semantic memory. In 2012, Hills et al. observed search patterns in semantic memory retrieval similar to the optimal foraging patterns of animals searching for food. The model uses data from a semantic fluency task, in which subjects are asked to recall members of a category (for example, all the animals they can think of in 3 minutes). There has been some opposition to this model: in 2015, Abbott, Austerweil, and Griffiths showed that applying a random walk model of memory to a well-structured network representation of semantic memory could produce optimal foraging behavior. So the question becomes: are we actually foraging optimally through semantic memory, or are we doing something that merely resembles optimal foraging? Avery and Jones (2018) compared the random walk and optimal foraging search models fit to the original data collected by Hills et al. (2012) and found that the optimal foraging model outperformed the random walk model. Current research on this topic involves comparing fMRI data to model outputs to determine brain activity during the semantic fluency task.
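A toy sketch of the foraging intuition (the timing data below are invented for illustration): a searcher should leave a semantic "patch" such as PETS when retrieval slows past the long-run average rate, echoing the marginal value theorem from animal foraging.

```python
# Toy inter-response times (seconds) between successive animals a subject
# names in a fluency task; long pauses suggest a patch is being depleted.
irts = [1.0, 1.2, 2.0, 4.5, 0.9, 1.1, 1.8, 5.0, 1.0]

def predicted_switches(irts):
    """Indices where a forager is predicted to leave the current patch:
    the inter-response time exceeds the subject's long-run average IRT."""
    long_run_mean = sum(irts) / len(irts)
    return [i for i, irt in enumerate(irts) if irt > long_run_mean]

print(predicted_switches(irts))  # the long pauses at positions 3 and 7
```

In the behavioral data, the foraging account predicts that such long pauses should coincide with transitions between semantic subcategories, whereas a pure random walk makes no such commitment.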
Related Papers:
Abbott, J. T., Austerweil, J. L., & Griffiths, T. L. (2015). Random walks on semantic networks can resemble optimal foraging. Psychological Review, 122(3), 558-569.
Avery, J., & Jones, M. N. (2018). Comparing models of semantic fluency: Do humans forage optimally, or walk randomly? Manuscript submitted for publication.
Hills, T. T., Jones, M. N., & Todd, P. M. (2012). Foraging in Semantic Fields: How We Search Through Memory. Psychological Review, 119(2), 431-440.
Catastrophic Interference in Neural Networks
Indiana University, Bloomington
This research looks broadly into the advantages and disadvantages of different models of semantic memory. In machine learning, predictive learning models are all the rage. Essentially, a predictive model learns by making a prediction, testing that prediction against the observed outcome, and sending an error signal back to update its weights in the hope that the next prediction will be more accurate. The neural networks we hear about so often today are a subset of these predictive models. In the literature, however, there is an ongoing debate about the biological plausibility of predictive models. A major design flaw that calls their plausibility into question is catastrophic interference (CI): the tendency to forget previously learned associations when presented with new associations. Currently, I'm investigating the issue of CI, whether by "fixing" neural networks or by introducing alternative architectures that are immune to this problem.
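The predict-test-update loop can be sketched in a few lines (all values here are toy data, not from any experiment): a linear model trained with the classic delta rule.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])  # the environment's actual mapping
w = np.zeros(3)                      # the model's weights, initially naive

for _ in range(2000):
    x = rng.normal(size=3)   # an input pattern
    outcome = true_w @ x     # the environment reveals the outcome
    pred = w @ x             # 1. create a prediction
    error = outcome - pred   # 2. test the prediction
    w += 0.05 * error * x    # 3. send the error back to update the weights

print(np.round(w, 2))  # converges toward [0.5, -1.0, 2.0]
```

CI arises in exactly this loop: if training later shifts to a second mapping, the same error-driven updates that learn the new associations also overwrite the weights encoding the old ones.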
Little previous research has examined CI within the field of computational semantic modeling. Dachapally and Jones (2018) report the circumstances under which CI appears in word2vec, a modern predictive model of semantic memory, and determine the effects of CI on the underlying word representations. I have expanded on this work by investigating possible fixes for CI, including elastic weight consolidation (EWC; Kirkpatrick et al., 2017), and alternative architectures such as random vector accumulation (RVA) models.
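A hedged sketch of the EWC idea, with made-up numbers: while learning a new task B, each weight is anchored to its task-A value in proportion to its Fisher-information importance for task A, so only unimportant weights move freely.

```python
import numpy as np

def ewc_loss(task_b_loss, w, w_star, fisher, lam=10.0):
    """Total loss = task-B loss + (lam / 2) * sum_i F_i * (w_i - w*_i)^2."""
    penalty = 0.5 * lam * np.sum(fisher * (w - w_star) ** 2)
    return task_b_loss + penalty

w_star = np.array([1.0, -0.5, 2.0])  # weights after learning task A
fisher = np.array([5.0, 0.1, 3.0])   # how much task A depends on each weight
w = np.array([1.1, 0.5, 2.0])        # candidate weights while learning task B

# The second weight moved a full unit but matters little to task A (low
# Fisher), so the penalty stays modest; moving the first or third weight
# by the same amount would cost far more.
print(ewc_loss(0.2, w, w_star, fisher))  # 0.2 + 5 * (5*0.01 + 0.1*1.0) = 0.95
```

In practice the Fisher values are estimated from gradients on task-A data; the toy array above just stands in for that estimate.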
The code for the models used in this research, along with the specific experiments performed, is available on my GitHub page.
For a review of the distributional semantic modeling literature and a discussion of constraints necessary to model human semantic learning see this paper:
Necessary Constraints on Continuous Distributional Semantic Models
Published Papers:
Mannering, W. M., & Jones, M. N. (2020). Catastrophic Interference in Predictive Neural Network Models of Distributional Semantics. Computational Brain & Behavior, 4(1), 18-33. https://doi.org/10.1007/s42113-020-00089-5
Related Papers:
Dachapally, R. R., & Jones, M. N. (2018). Catastrophic Interference in Neural Embedding Models. Proceedings of the 40th Annual Meeting of the Cognitive Science Society.
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114, 3521-3526.