[Shameless self-plug, hope that’s ok]
tl;dr: Join our Kaggle competition to trans-decode brain data and win up to $1000!
Can you actually decode which images people were merely thinking of, using data recorded while they were viewing those very images?
In memory research, decoders are often trained on localizer data to extract the neural activation patterns of specific stimuli (e.g., “which pattern occurs when we see a clown?”). These decoders are then used to detect the presence of those patterns in other parts of the experiment, which is taken as evidence for reprocessing (e.g., “after learning something about clowns, the ‘clown’ activation pattern is more active than expected; that means the knowledge is being consolidated”). But how well does this actually work? Can we trans-decode from one modality to another?
While we know that concept cells in the MTL respond to concepts regardless of sensory modality, it is rather unlikely that we can measure them with MEG. But maybe some correlated signature is shared between different sensory modalities anyway?
We recorded a simple paradigm with magnetoencephalography (MEG): we showed participants ten different visual items, each many times. Later, with their eyes closed, we played back spoken words describing the items and asked them to mentally visualize the associated item as vividly as possible. Can you train a decoder on the visual presentation (the localizer) and decode which item people imagined?
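If you are wondering what “train on the localizer, test on imagery” looks like in practice, here is a minimal cross-decoding sketch in Python/scikit-learn. Everything in it is illustrative and not the official competition baseline: the array names, shapes, and classifier choice are assumptions, and the placeholder random data stands in for your own epoched MEG features.

```python
# Minimal cross-decoding sketch (illustrative only, not the official baseline):
# fit a linear decoder on the visual localizer trials, then apply it to the
# imagery trials. Assumes the MEG data have already been epoched and flattened
# into (n_trials, n_channels * n_timepoints) feature matrices, e.g. via MNE.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_visual = rng.standard_normal((500, 306 * 50))   # placeholder localizer features
y_visual = rng.integers(0, 10, size=500)          # placeholder item labels (0..9)
X_imagery = rng.standard_normal((100, 306 * 50))  # placeholder imagery features

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(max_iter=1000, C=0.1),  # simple regularized linear decoder
)
clf.fit(X_visual, y_visual)            # learn item patterns from viewing
pred_imagery = clf.predict(X_imagery)  # trans-decode the imagined items
print(pred_imagery[:10])
```

Whether such a decoder transfers from perception to imagery at all is exactly the open question of the challenge.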
Join now and win prizes totaling $1000: IMAGINE-decoding-challenge | Kaggle
