Caglar, L.R.*, Corbo, J.*, Erkat, O.B., Polack, P.-O. (2025).
#manifold-capacity #representational-geometry #dimensionality #visual-cortex #orientation-discrimination #mice #two-photon-calcium-imaging
... COMING SOON ...
Caglar, L.R.*, Walbrin, J., Almeida, J., Mahon, B. (2024). In Revision.
[paper] [repository]
#componentiality #representational-geometry #predictive-encoding #kinematics #action-representation #parietal-cortex #CNN #concept-representation #fMRI
Successfully identifying and interacting with objects requires a combination of high-level visual knowledge, an understanding of objects' intended functions, and knowledge of how to grasp and manipulate them. How are these components combined into a whole object representation? We built a predictive encoding model to empirically investigate the componential nature of object-directed action representations, hypothesizing that they are built componentially in parietal cortex out of a set of elemental hand movements that facilitate grasping and manipulating objects. We show that a generative model based only on a linearly weighted combination of object-associated hand movements can successfully predict the objects' action-related fMRI activity in the supramarginal gyrus. RSA searchlight analyses with three large CNNs (AlexNet, ResNet50, VGG16) further confirmed that only action-associated features, and not visual features, account for the variance in parietal cortex. Beyond offering important new insights into the componential mechanism of action representation, and information representation more broadly, this study proposes a novel method to reverse engineer and isolate neuromotor signals from representations of visually presented manipulable objects.
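The core idea of a linearly weighted predictive encoding model can be sketched as follows. This is a minimal illustration on fully synthetic data, not the paper's pipeline; all variable names and dimensions are assumptions.

```python
import numpy as np

# Sketch of a linear predictive encoding model: each object's (synthetic)
# voxel pattern is modeled as a weighted sum of hand-movement regressors.
# All data are simulated; names and sizes are illustrative only.
rng = np.random.default_rng(0)

n_objects, n_movements, n_voxels = 40, 5, 200
# Object x hand-movement association matrix (e.g., ratings of how strongly
# each elemental hand movement is associated with each object).
movement_features = rng.random((n_objects, n_movements))
# Ground-truth linear weights mapping movements to voxel responses.
true_weights = rng.normal(size=(n_movements, n_voxels))
voxel_patterns = (movement_features @ true_weights
                  + 0.1 * rng.normal(size=(n_objects, n_voxels)))

# Fit weights on a training split, then predict held-out objects.
train, test = np.arange(30), np.arange(30, 40)
weights, *_ = np.linalg.lstsq(movement_features[train],
                              voxel_patterns[train], rcond=None)
predicted = movement_features[test] @ weights

# Score: correlation between predicted and observed pattern per held-out object.
scores = [np.corrcoef(predicted[i], voxel_patterns[test][i])[0, 1]
          for i in range(len(test))]
mean_score = float(np.mean(scores))
```

A high held-out prediction score indicates that a purely linear combination of the movement regressors captures the pattern structure, which is the logic of the test described above.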
Caglar, L.R., Mastrovito, D., Hanson, S.J. Under Review.
[paper] [poster]
#representational-geometry #neural-manifolds #multidimensional-scaling #1/f #metricity #concept-representation #topology #fMRI
What is the shape and structure of the space in which the brain represents objects and concepts? We investigated the neural structure of conceptual space using three publicly available neural datasets (fMRI and electrophysiology) that sample a large number of visually presented concepts. In all cases, the underlying representations formed a spherical manifold under non-metric multidimensional scaling, a common dimensionality reduction technique used to visualize and inspect the structure of high-dimensional data. To disentangle whether the observed spherical manifold is an intrinsic property of the space or an artifact of the dimensionality reduction method, we investigated a multitude of variables that may affect the shape of the derived manifold in neural and simulated data. This revealed that satisfaction of the triangle inequality (the third metric axiom), the data's inherent category structure, and its frequency spectrum are crucial for distinguishing artifact from intrinsic property, and indicates that spherical manifolds could be an intrinsic topological feature of how the brain represents conceptual knowledge.
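The triangle-inequality diagnostic mentioned above can be sketched as a simple brute-force check over triples of a dissimilarity matrix. This is a generic illustration, not the paper's analysis code.

```python
import numpy as np

def triangle_violations(D, tol=1e-9):
    """Fraction of ordered triples (i, j, k) of distinct items in a
    dissimilarity matrix D violating D[i, k] <= D[i, j] + D[j, k]."""
    n = D.shape[0]
    total = violated = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) < 3:
                    continue
                total += 1
                if D[i, k] > D[i, j] + D[j, k] + tol:
                    violated += 1
    return violated / total

# Euclidean distances satisfy the triangle inequality by construction ...
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
D_euclid = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(triangle_violations(D_euclid))  # -> 0.0

# ... whereas arbitrary symmetric "dissimilarities" generally do not.
D_arbitrary = rng.random((8, 8))
D_arbitrary = (D_arbitrary + D_arbitrary.T) / 2
np.fill_diagonal(D_arbitrary, 0.0)
print(triangle_violations(D_arbitrary) > 0)
```

Checking empirical (behavioral or neural) dissimilarities this way before embedding them is one concrete way to separate a property of the data from a property of the embedding method.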
Euclidean vs. Non-Euclidean Metricity of Neural Representational Spaces
Caglar, L.R., Hanson, C., Hanson, S.J. (2021).
[paper]
#representational-geometry #similarity #metric #ultrametric #cognitive-modeling #concept-representation #fMRI
A longstanding debate in cognitive science centers on two competing mathematical theories of similarity that make distinct predictions about the structure of mental representations and how to model the representational space in which they are stored. Taking a mathematical modeling approach, we compared these theories to examine whether a Euclidean space (e.g., fit by multidimensional scaling (MDS)), based on Shepard's Universal Law of Generalization (1964, 1987), or an ultrametric embedding space (e.g., fit by clustering), based on Tversky's set-theoretic contrast model (1977), better characterizes the latent representational embedding space, and whether the two are congruent across minds and brains. Using the case of colors and letters, a human fMRI study showed that the metric structure underlying the color wheel can be reproduced from V4 data. Surprisingly, however, the behaviorally ultrametric case of letters also revealed a metric structure in extrastriate cortex (V2-V5), suggesting that the brain might favor a Euclidean metric representational space.
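One standard way to quantify how tree-like (ultrametric) a similarity space is, as opposed to Euclidean, is the cophenetic correlation of a hierarchical clustering solution. The sketch below uses SciPy on simulated data and is only an illustration of the comparison, not the study's analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

# Simulated data with a clear hierarchical (two-cluster) structure.
rng = np.random.default_rng(2)
cluster_a = rng.normal(loc=0.0, scale=0.3, size=(10, 4))
cluster_b = rng.normal(loc=3.0, scale=0.3, size=(10, 4))
X = np.vstack([cluster_a, cluster_b])

observed = pdist(X)                       # condensed Euclidean distances
tree = linkage(observed, method="average")
# Cophenetic distances are the ultrametric distances implied by the tree;
# their correlation with the observed distances measures ultrametric fit.
cophenetic_r, _ = cophenet(tree, observed)
print(cophenetic_r > 0.9)
```

Comparing this ultrametric fit against the stress of an MDS embedding of the same dissimilarities gives a head-to-head test of the two theories' predicted geometries.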
Structural Similarity Analysis: A Toolbox for Analyzing the Metricity of Representational Spaces
Caglar, L.R., Hanson, C., Hanson, S.J. (2021).
[dissertation] [paper]
#representational-geometry #similarity #metricity #ultrametric #cognitive-modeling #concept-representation #fMRI
A key challenge in neuroscience and cognitive science is to understand the organization and format of concept representations. Part of solving this problem is to accurately characterize the structure of the representational space, which carries important information about how information is organized. We developed a mathematical and computational toolbox of methods for characterizing the representational structure and metricity of similarity spaces derived from behavioral data, neural data, or artificial neural networks.
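A core test in any such metricity analysis is the strong (ultrametric) triangle inequality, which strengthens the ordinary triangle inequality. The sketch below is a generic illustration of that check, with hypothetical names, not the toolbox's actual API.

```python
import numpy as np

def ultrametric_violations(D, tol=1e-9):
    """Fraction of ordered triples of distinct items violating the strong
    (ultrametric) triangle inequality D[i, k] <= max(D[i, j], D[j, k])."""
    n = D.shape[0]
    total = violated = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) < 3:
                    continue
                total += 1
                if D[i, k] > max(D[i, j], D[j, k]) + tol:
                    violated += 1
    return violated / total

# A two-level "tree" distance (1 within a cluster, 2 between clusters)
# is ultrametric by construction ...
labels = np.array([0] * 4 + [1] * 4)
D_tree = np.where(labels[:, None] == labels[None, :], 1.0, 2.0)
np.fill_diagonal(D_tree, 0.0)
print(ultrametric_violations(D_tree))  # -> 0.0

# ... while generic Euclidean distances are usually not ultrametric:
# any triangle with a strictly longest side violates the strong inequality.
rng = np.random.default_rng(3)
X = rng.normal(size=(8, 3))
D_euclid = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(ultrametric_violations(D_euclid) > 0)
```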
Behavioral and Neural Representations show a Perceptual-Conceptual Dichotomy for the Structure of Conceptual Space
Caglar, L.R., Hanson, C., & Hanson, S.J. Under Review.
[poster]
#representational-geometry #similarity #metricity #ultrametric #topology #cognitive-modeling #perception #concept-representation #fMRI
In this paper we test the metric bias hypothesis of conceptual space proposed in Caglar et al. (2021). To test this hypothesis with better control over the internal feature structure of the stimuli and how it relates to the space's representational geometry, we applied dense rather than sparse sampling, selecting conceptual subspaces with carefully constructed naturalistic categories consisting of an integral (correlated-features) category structure (e.g., birds or furniture) and a set of separable (uncorrelated-features) categories (Shepard et al., 1961). In both behavioral and neural fMRI data, natural categories showed a better fit to the ultrametric space, whereas the unnatural Boolean category showed a better metric fit. This suggests a perceptual-conceptual dichotomy in the metricity and geometry of the resulting representations, with conceptual categories favoring an ultrametric structure that can accommodate hierarchies and perceptual categories, with their smoother similarity gradient, favoring a metric space.
Decoding Second Order Isomorphisms in the Brain
Hanson, S.J., Caglar, L.R., Hanson, C. (2020).
[paper]
#neural-decoding #representational-geometry #second-order-isomorphism
In this paper we introduce a new method for decoding neural fMRI data. It is based on two assumptions: first, that neural representation is distributed over networks of neurons embedded in voxel noise, and second, that stimuli can be decoded as learned relations from sets of categorical stimuli. We illustrate these principles with two types of stimuli, color (wavelength) and letters (visual shape), both of which evoke early visual system responses but at the same time must be learned within a given function or category (i.e., color contrast, the alphabet). Key to the decoding method is reducing the stimulus cross-correlation with a matched noise voxel sample by normalizing the stimulus voxel matrix, which unmasks a highly discriminative neural profile per stimulus. When this new voxel space is projected onto a smaller set of dimensions, the relational information takes a unique geometric form, revealing functional relationships between sets of stimuli as second-order isomorphisms.
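The normalize-then-project logic might be sketched as below. This is a loose, simulated illustration of the idea (z-scoring a stimulus x voxel matrix against a matched noise sample, then projecting to a few dimensions via SVD), not the paper's actual procedure; all names and data are assumptions.

```python
import numpy as np

# Simulated stimulus x voxel data: low-rank signal buried in voxel noise.
rng = np.random.default_rng(4)
n_stimuli, n_voxels = 12, 300
signal = rng.normal(size=(n_stimuli, 5)) @ rng.normal(size=(5, n_voxels))
stimulus_voxels = signal + rng.normal(scale=2.0, size=(n_stimuli, n_voxels))
# A matched sample of "noise" voxels with comparable noise statistics.
noise_voxels = rng.normal(scale=2.0, size=(n_stimuli, n_voxels))

# Normalize each voxel by the mean and spread of the matched noise sample,
# attenuating shared noise structure in the stimulus matrix.
normalized = ((stimulus_voxels - noise_voxels.mean(axis=0))
              / noise_voxels.std(axis=0))

# Project the normalized voxel space onto a few dimensions (PCA via SVD)
# to expose the relational geometry among the stimuli.
centered = normalized - normalized.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
projection = U[:, :3] * S[:3]
print(projection.shape)  # -> (12, 3)
```

In this low-dimensional projection, relations among stimuli (rather than raw voxel values) carry the decodable structure, which is the sense in which the geometry reflects a second-order isomorphism.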
Shallow vs. Deep Artificial Neural Networks: Modeling Human-like Attentional Bias
Hanson, C., Caglar, L.R., Hanson, S.J. (2018).
[paper]
#representation-learning #deep-learning #category-learning #attentional-bias #mechanistic-interpretability
A crucial part of understanding information representation is to understand the processes that give rise to it, namely the information compression mechanisms that underlie abstraction and generalization and consequently facilitate learning. A different line of my research has explored this by investigating whether artificial neural networks and humans show the same attentional biases in category learning when stimulus features are separable or integral and are paired with a categorization rule that calls for attention to a single feature (filtration) or to multiple features (condensation; Garner, 1974). Comparing the learning dynamics of humans and shallow vs. deep ANNs, we found that both deep and shallow ANNs could reproduce the human-like attentional bias when given high-dimensional stimuli, but that this was characterized by different internal representations and learning curves (hyperbolic vs. exponential, respectively).
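The hyperbolic-vs-exponential distinction in learning curves can be made concrete by fitting both functional forms to the same error trajectory and comparing fit quality. The sketch below uses simulated data and SciPy's `curve_fit`; it illustrates the comparison only, with assumed parameter values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two candidate learning-curve shapes: error decaying exponentially
# vs. hyperbolically over training trials.
def exponential(t, a, b):
    return a * np.exp(-b * t)

def hyperbolic(t, a, b):
    return a / (1.0 + b * t)

trials = np.arange(1, 101, dtype=float)
rng = np.random.default_rng(5)
# Simulated error data generated from a hyperbolic curve plus noise.
errors = hyperbolic(trials, 1.0, 0.2) + 0.01 * rng.normal(size=trials.size)

# Fit both forms and compare sum-of-squares error.
sse = {}
for name, fn in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    params, _ = curve_fit(fn, trials, errors, p0=(1.0, 0.1), maxfev=5000)
    sse[name] = float(np.sum((errors - fn(trials, *params)) ** 2))

# The generating (hyperbolic) form should fit markedly better.
print(sse["hyperbolic"] < sse["exponential"])
```

Applied to human and network learning trajectories, the better-fitting form is one diagnostic of whether two learners that reach the same endpoint got there by the same dynamics.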
Against a Cognitive Functionalism for Artificial Neural Networks: Commentary on Lake et al. (2017)
Caglar, L.R. & Hanson, S.J. (2017).
[paper]
#representation-learning #deep-learning #learning-dynamics #mechanistic-interpretability
The claims that learning systems must build causal models and provide explanations of their inferences are not new; they advocate a cognitive functionalism for artificial intelligence. This view conflates the relationship between implicit and explicit knowledge representation. In this commentary on Lake et al.'s (2017) paper in Behavioral and Brain Sciences, we present recent evidence that neural networks do engage in model building, that this model building is implicit, and that it cannot be dissociated from the learning process.