VISUAL SYNTHESIS | June 2025 | VOLUME 2
‘Real Brains and Virtual Worlds: Validating a Novel Oddball Paradigm in Virtual Reality’
With Eden X Redman
Eden Redman is a cognitive neuroscience researcher whose
work explores innovative technological enhancements to electroencephalography (EEG) research.
DISCUSSION
Eden Redman is pursuing his MSc in Neuroscience at McGill University, where he brings a unique blend of technical curiosity and innovative passion to the field of electroencephalography (EEG) research. In this discussion, we explore how virtual reality headsets could provide a lightweight alternative to exclusively lab-based environments when conducting EEG research, as presented in Real Brains and Virtual Worlds: Validating a Novel Oddball Paradigm in Virtual Reality. See the transcript.
TRANSCRIPT
Erin: Hello, Eden. Thank you so much for joining me. Today we'll be discussing the publication entitled Real Brains and Virtual Worlds: Validating a Novel Oddball Paradigm in Virtual Reality. So Eden Redman, thank you so much.
Eden: Happy to be here.
Erin: So I wanted to start with the methodology of electroencephalography. Can you tell me a little bit about what electroencephalography is?
Eden: For sure-- so that is simply looking at the electrical activity of the brain through sensors, or electrodes, affixed to the scalp.
Trying to get as good a connection as possible. I believe in this study we would've used wet electrodes.
We use like a conductive gel, in order to get a strong signal between the underlying brain activity and the electrodes themselves.
Erin: I was curious, if you had hair like mine, do you have to put something on top of the skull or can you just connect the electrodes to any surface?
Eden: Typically you're wearing a cap.
Erin: Okay.
Eden: It ensures that the hair is like pressed flat, that there's sufficient connectivity between each electrode. It also is important for ensuring the location of electrodes.
It's basically just picking up electrical activity of the neurons, the electrically active cells beneath the skull.
And then what you record is an aggregation of the activity of the cell population beneath it. You put on a number of electrodes. I think in this one we would've used either 16 or 32 electrodes.
There's a convention... I think it's called, uh... a montage.
A montage is a set distribution of electrodes.
And so there's really only a few conventional coordinate systems for placing electrodes, and that one is pretty frequently used.
So each of those would yield a single point voltage. That is to say, at each step in time you simply get a singular kind of floating average of the electrical activity of the surrounding cells. And it's millions upon millions of cells. You could argue that it's an aggregation of the entire brain as it's being recorded by each electrode. It's very noisy. What really happens is you get a fairly steep drop-off in your ability to detect the influence of neurons as you move away from the electrode.
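To make that "floating average" concrete, here is a toy numerical sketch in Python. Nothing in it comes from the study: the source count, positions, and falloff function are all illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the study's model): an electrode's point voltage as a
# distance-weighted average of many underlying sources.
rng = np.random.default_rng(0)

n_sources = 100_000                              # millions in a real brain
positions = rng.uniform(-5, 5, (n_sources, 3))   # assumed source locations (cm)
activity = rng.normal(0, 1, n_sources)           # instantaneous source activity

electrode = np.array([0.0, 0.0, 5.0])            # electrode on the "scalp"
dist = np.linalg.norm(positions - electrode, axis=1)

# Nearby cells dominate, but distant ones still contribute -- the floating
# average described above, with influence dropping off with distance.
weights = 1.0 / (1.0 + dist)
point_voltage = np.average(activity, weights=weights)
print(f"point voltage (arbitrary units): {point_voltage:+.4f}")
```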
Eden: Yeah, basically, uh, the different coordinate systems are how you define those anchor points at the start and how much you divide it up. And there's just different caps for the different coordinates, typically.
This is important because there is a standard, fairly normalized general map that humans have their brains organized around.
That's just emerged, and it's not always a hard and fast rule.
Like some people you can cut out half of their brain.
Erin: Oh my gosh…
Eden: And they'll survive if it's, if they're young enough. So obviously that one's not gonna work too well. For the vast majority of the population, theres…There's gonna be some variation.
Erin: So when you are conducting this research, were people wearing the caps as well as a VR headset?
Eden: Yeah.
Erin: I'm just kind of curious, like how long did it take to kind of prep the participants?
Are we talking hours?
Eden: Uh-- it really depends. It wasn't like a super long experiment either, so-- it's definitely under two hours for everything including setup, so it wasn't too onerous.
Erin: So another term that I had to look up was Event-Related Potential experiments, which measure brain responses to specific stimuli. Can you break that down into something a little more accessible for someone like myself who's outside of this field?
Eden: So each electrode has a point voltage, which is just an aggregate of the surrounding activity, linearly weighted by distance to the electrode. Does that make sense?
Erin: Kind of? I drew a picture of the electrode that's like on a person's head...
Eden: Here's the skull…
Erin: Yeah.
Eden: And then beneath that, you have a bunch of cells. The cells directly underneath are gonna be contributing a lot to that signal.
Erin: Yeah.
Eden: Comparatively... but the cells adjacent to the cells directly underneath are also going to be contributing.
So you get this linear drop-off, but you still get the influence of adjacent brain regions.
Erin: Wow.
Eden: We're using this templated grid that other people also have adopted within their research. So without using like fancy placement systems, we can get pretty good placement and overcome a good amount of that variability.
With ERPs, it is essentially saying: here are these anchor points of activity.
And in time, we're gonna present some stimulus. You have a participant who's been set up, and we're getting a point voltage for each electrode.
Most setups are something along the lines of: you're able to see all of the channels in real time, and they're fluctuating point voltages, which are the average of the adjacent cells with a linear drop-off. I kind of stress that point because it's easy to come across this and think, oh, it's simply just the brain activity directly underneath.
But it's quite a bit more complicated than that.
Erin: Yeah. It sounds like there's depth to it. The way I'm drawing it, at least it feels like it's actually like you're kind of operating in the Z direction.
Eden: Yep. Yeah. And another component of that is you're actually getting a proportionally much stronger signal from cells whose axons are perpendicular to the plane of the electrode.
Erin: That one's fun. Perpendicular to the plane. Oh, the perpendicular.
Eden: You got it. I think you got it. Basically, those are the ones whose signals are most strongly picked up at the electrode.
Erin: Yeah.
Eden: Uh, and it's just due to electromagnetic properties: perturbing fields affecting fields.
Erin: Fields affecting fields.
Eden: Abstracted to that. But you also pick up non-perpendicular ones, though that actually degrades... nonlinearly.
But with all those caveats, we have a point voltage which, at a higher order, represents brain activity underlying the electrode, with like three asterisks. And then from there you can present the same type of stimulus set.
So typically if you want a controlled experiment, you don't wanna deal with the confounding variables-- things that aren't the thing that you're interested in... one way to do that on the physiological level is to get participants to fixate on some neutral stimulus. So whether that's like a fixation cross.
Erin: Oh, okay. Yeah. Like this, just like cross in the middle of a screen or something.
Eden: Yeah. And so that's to get them to always be starting from, uh, some neutral point.
Erin: And is this like in a normal experiment, or are we already talking about when they're actually using the VR headset?
Eden: Uh, this is pretty standard.
Erin: Okay. So I'm just gonna draw a picture of someone not wearing a headset.
Eden: Well, okay. So I'll actually take a run at it from an even broader perspective. Empiricism is making a prediction, designing an experiment, ideally you're trying to control for the confounds. And then you run that experiment and you see the outcome.
Now that isn't super feasible with, uh, cognitive neuroscience, certainly with EEG, because it's noisy. One common misconception is that 'oh, you're using 10% of your brain at any given time'. The reality is you're using a hundred percent of your brain pretty much a hundred percent of the time; otherwise you'd be dead.
So, um, with the nature of the methodology and the caveats that I mentioned, you basically have zero probability of being able to make a definitive statement about the functioning of the brain in time to a stimulus, or of inferring internal cognitive states, by just running an experiment one time.
And so that's where ERPs come in: it's a standard way to run the same sort of experiment multiple times over and have some strong constraints on the initial conditions. So basically you boil it down to a trial, and maybe you have many different types of trials. This one was pretty simple.
Simply just two trial types.
Erin: I think I was able to capture a little bit more of kind of like the role that Event Related Potentials play in all this. So that was really helpful.
Eden: Basically, the variability of a single trial and a single person is so high that you just need to have many, many, many trials and many people to be able to kind of hone in on an effect or make anything even approximating a definitive statement.
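That averaging logic can be shown in a few lines of Python. This is a minimal sketch, not the study's pipeline; the trial count, noise level, and waveform shape are all made-up illustrative numbers.

```python
import numpy as np

# Minimal sketch of trial averaging (illustrative numbers, not the study's).
rng = np.random.default_rng(0)
n_trials, n_samples = 200, 300

t = np.arange(n_samples)
# Hypothetical "true" time-locked brain response: a small bump after onset.
true_erp = 2.0 * np.exp(-((t - 150) ** 2) / (2 * 20**2))

# Each single trial buries that response in noise an order of magnitude larger.
trials = true_erp + rng.normal(0, 20, (n_trials, n_samples))

# Averaging over trials shrinks the noise by roughly sqrt(n_trials),
# letting the stimulus-locked response emerge.
erp = trials.mean(axis=0)
print(f"correlation with true response: {np.corrcoef(erp, true_erp)[0, 1]:.2f}")
```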
Erin: Tell me about oddball tasks.
Eden: This is, I think, a natural progression from ERPs.
ERPs are just a broad category of experimental design. So when we're conducting an oddball ERP analysis, you have two sets of stimuli. One is infrequent-- the oddball-- and the other is frequent.
So it could be virtually any sensory modality. You can have an auditory oddball; pretty sure you could have a tactile oddball.
Basically, the first portion of it was to just demonstrate that there was coherent brain response independent of whether we were using like a traditional monitor versus a VR headset with the same paradigm.
Erin: Okay. When I was doing a little bit of cursory reading, it seemed like calibration? The oddball task was the thing to like actually detect the stimulation, I guess, or like see if the brain was responding to something.
Eden: Yeah. The first part of the study, it was the same paradigm on either a traditional monitor or in a virtual environment. And so I guess in that sense it was just ensuring that there was no significant difference.
Erin: Okay. Going back to the infrequent and frequent nature of an oddball task, do you have like, an example?
Eden: Yep. It could be beeps and boops.
Erin: Ah, okay.
Eden: Like auditorily, you could present a boop. The lower tone is the standard stimulus, the one that you're presenting like 80% of the time. Then every so often, 20% of the time, you have a beep-- a high-pitched tone. You could flip those.
It's quite arbitrary. It's just demonstrating that you have an identifying mechanism. So you can compare the brain response, i.e. the voltages across time, across the electrodes in response to the oddball. And then you can do a comparison of the average brain response to the beeps versus to the boops.
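Generating that 80/20 sequence is simple in code. Here is a sketch: the proportions come from the interview, but the trial count and shuffling scheme are assumptions, not the study's actual stimulus script.

```python
import random

# Sketch of an 80/20 oddball sequence (proportions from the interview;
# everything else is an illustrative assumption).
n_trials = 200
trials = ["boop"] * int(n_trials * 0.8) + ["beep"] * int(n_trials * 0.2)
random.shuffle(trials)  # real paradigms often also limit consecutive oddballs

# Analysis then averages the brain response separately per trial type
# and compares the two average waveforms.
print(trials[:10], trials.count("beep") / n_trials)
```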
Erin: I'm just drawing beeps and boops everywhere on this illustration.
Eden: In this experiment it was visual, which is quite a bit more relevant.
Erin: The visual one. I feel like I saw some images of that in the paper. Can you describe a little bit of what the visual difference was between a frequent and infrequent stimulus?
Eden: Uh, yep. Frequent versus infrequent was basically just color.
Erin: Okay.
Eden: So we didn't vary anything else... like the size or the shape.
It's also assuming that the person isn't colorblind or partially colorblind. It's kind of an arbitrary thing.
The point is basically that you can discriminate between the two. Yeah.
Erin: What is HEOG?
Eden: It's a subset of EOG. It is just the eye version of EEG-- basically, electrooculography.
Erin: Eye version! Like, literally like the seeing eye.
Eden: Yes. H-E-O-G is just like placing electrodes horizontally about the eye.
Erin: Wow. Okay.
Eden: And VEOG is placing them vertically about the eye. Why did we do this? So EEG signals are comparatively small, like millionths of a volt.
Eye movement is muscle activity; your eyes are moved by muscles, and the electrical activity there is one to two orders of magnitude stronger. And so you need to be able to control for it at some point in post. So, in the analysis, the electrical profile of the eye movement allows you to regress out that activity.
Erin: ... Regress out the eye muscle activity?
Eden: Yeah. So that you can be left with more or less just the brain activity.
Erin: That's cool. It's like, 'we just want to focus on brain activity... there's a little bit of noise from eye muscle activity... therefore we have to measure that eye activity so we can factor it out later in the analysis'.
Eden: That was a more succinct way of saying it. Yes.
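In code, that "regress out" step can be as simple as a least-squares fit. The sketch below is a generic illustration of the approach, not the study's actual artifact-correction pipeline, and all the signal scales are simulated.

```python
import numpy as np

# Generic sketch of EOG regression (not the study's exact pipeline).
rng = np.random.default_rng(0)
n = 5000

brain = rng.normal(0, 5, n)    # "true" brain signal, microvolt scale
eog = rng.normal(0, 100, n)    # eye signal, one to two orders larger
eeg = brain + 0.3 * eog        # the electrode records a mix of both

# Estimate how strongly the EOG leaks into the EEG channel, then subtract.
b = np.dot(eog, eeg) / np.dot(eog, eog)  # least-squares coefficient
cleaned = eeg - b * eog
print(f"estimated leak coefficient: {b:.3f} (simulated truth: 0.3)")
```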
Erin: We've made it to the actual questions. Thank you so much for going through the terminology with me. What inspired you to pursue electrical brain activity research, like you personally and how did that curiosity lead you to explore it in combination with VR?
Eden: I started doing research late in my second year of undergrad. My current studies are less immediately focused on that. I volunteered in a few different labs; I've been a part of like six or seven different labs.
Erin: So you're kind of like touring a little bit to see what's out there, and where did you have the greatest interests?
Eden: What led me to explore VR is just that the opportunity arose in my lab and I was like, that's interesting. I didn't actually have a functioning laptop until my second year of university.
Erin: Exciting.
Eden: So I was not an overly technical person until then. That changed pretty quickly.
Erin: You became more technical? Did this experiment kind of make you a little bit more interested in exploring more technical domains?
Eden: I think by the time I got to playing with VR, it was the opportunity to get exposure to a new language and framework.
Erin: What's the language and framework?
Eden: Previously it was mostly Python and MATLAB. The experiment was coded primarily in C#, in the Unity engine.
Erin: I imagine Unity gives you a lot of flexibility to explore, like, 3D animation, gaming. It's fairly versatile.
Eden: A lot of it was like scripting based.
Erin: What motivated you and your cohort to research the potential of using VR with electroencephalography?
Eden: Fairly accessible tech and it wasn't too onerous to synchronize them. One challenge with playing around with new tech is that you need to have a fine degree of synchronization.
Erin: What does that mean?
Eden: You have to ensure that different clocks are aligned. When you're presenting stimulus and demarcating those events in the data stream, you have to make sure they actually land at the right time. That was probably the biggest hurdle. It was just a software and hardware challenge.
Erin: So it's like the software that's actually measuring all of this, and the hardware that's taking in the input.
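One common way to handle that alignment is to timestamp shared trigger events on both clocks and fit a linear correction for offset and drift. The sketch below is a generic illustration with made-up timestamps, not the study's synchronization code.

```python
import numpy as np

# Generic clock-alignment sketch (hypothetical timestamps, in seconds).
stim_times = np.array([1.002, 2.010, 3.001, 4.008, 5.003])      # stimulus PC
eeg_times = np.array([11.503, 12.512, 13.504, 14.512, 15.508])  # EEG amplifier

# Fit eeg_time ~= drift * stim_time + offset over the shared trigger events.
drift, offset = np.polyfit(stim_times, eeg_times, 1)

def to_eeg_clock(t: float) -> float:
    """Map a stimulus-PC timestamp onto the EEG amplifier's clock."""
    return drift * t + offset

print(f"drift: {drift:.6f}, offset: {offset:.3f} s")
```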
Erin: So before you were running any of the experiments, what were some of your personal predictions for what you might observe?
Eden: Pretty much that, at least for the first one, that we wouldn't get any significant difference between the standard monitor and the Vive.
Erin: Why did you think that?
Eden: I think there has been other literature demonstrating that you get brain responses in VR that are as strong as, if not stronger than, with a traditional monitor.
Erin: Okay.
Gotcha. Okay. Across the two experiments, how did you feel about the sample size?
Eden: Pretty good. Given that we just had the two conditions. Yeah. And the variability was not too crazy.
Erin: Can you talk to me about the two conditions that you were experimenting with?
Eden: The one being the VR headset and the other being the traditional monitor. And so each participant would come in. On the monitor, they're just viewing the flat image, so it comes off as a circle, essentially. The standard and the target stimulus are just two differently colored circles. And then in the VR they're viewed more as spheres, because you have, uh...
Erin: Depth.
Eden: You have depth. You have two different screens that are slightly offset from one another.
Erin: I kind of wrote this out as like for each participant, they had experiment one that was like the standard way of looking at the target and then the experiment two was when they actually had the headset on.
Eden: Yep. One important piece is that we made sure people had a chin rest so that they wouldn't move their head. The most important thing is that their head is fixated and they're not getting additional depth cues. Yeah.
Erin: Now after we've like, talked a little bit about the kind of grid system of, the cap and like the way that the electrodes are placed on the person's head. I don't know if this next question makes much sense anymore, but at the time when I was writing this, I was kind of curious if there was like a particular area of the brain that you were monitoring.
Eden: It was a pretty even spread; we had all the standard landmarks.
Erin: So how did you choose the hardware to use? We've got the HTC Vive versus the ViewPixx monitor.
Eden: Accessible enough, and it had high enough resolution-- both in terms of the resolution of the screen and also temporally, so that we could change the stimulus quickly enough.
Erin: It is accessible and you were able to make changes quickly and accurately.
Eden: Yeah, the temporal frame rate.
Erin: So can you walk me through how you all administered experiment one and two? Were you in a lab?
Eden: Yep, both in the lab. We kind of randomized the condition.
Like some participants would do the monitor first, and then some would do the VR one first.
Erin: Ah, okay. What should future experiments take away from the findings with experiment one?
Eden: I don't really like the standard experimental paradigm. So I think it is a pretty good case for running more in VR.
Erin: Why don't you like the standard?
Eden: Oh, everyone just falls asleep.
Erin: What? Okay.
Eden: It's really boring.
I mean, they're often meant to be like cognitively taxing. I don't know how much better that will be in VR. I tend to absorb information better when it's spatial, so...
Erin: Yeah, I did not expect that. Okay. So going back, you said that these are meant to be cognitively taxing. I don't think I appreciated that as much when I was reading this. I thought that it was like you're responding to stimuli on a screen.
Eden: Yeah. This experiment was not intended to be overly taxing, but, uh, other ones certainly.
Erin: What does it mean to be taxing in this context?
Eden: Just like it takes a high degree of attention.
Erin: Gotcha.
Okay. And is falling asleep a metaphor, or is it literally like, 'no, for real, there's such a significant cognitive load that people are actually dozing off'?
Eden: Yeah.
Erin: Oh my gosh. I love humans.
I am glad I captured that.
Bringing this back around, this is like a really grandiose way of saying it, but it sounds like experiment one proved that VR was viable for running these types of tests.
Eden: Yep.
Pretty much. There were no big red flags.
Erin: Okay, great. Experiment two. So, to test depth with traditional EEG setups, does the screen actually physically move closer and farther away from the participant? That's how I was seeing it; I don't know if that's true.
Eden: Typically depth is not a part of those studies because you can't really do depth like you can in VR. It's just not possible. And so it's another reason why I was interested in VR adoption in computational neuroscience labs because you can just present stimulus that you otherwise can't. Kind of opens new doors.
Erin: Why test for depth? What do you think being able to like study a person's perception of depth will offer to future studies?
Eden: Well, I mean, there are people who know a whole lot more about depth perception than I do.
We were pretty much coming at it, myself included, fairly naively. The way we varied the stimulus in the second experiment: instead of the color being the parameter that was changing, it was the depth. So what we did was size-match the stimulus-- the one farther away was bigger, so that it occluded the same area on the retina. And so the only thing that was changing was the depth. And so we were able to get an oddball just by varying that one dimension.
Erin: I was setting up the drawing a little bit for this. I'm picturing a person that's like kind of staring in a field of view. You mentioned that there would be like this sphere that was a little bit bigger or it was a certain size at a certain distance and then as it got closer it was like getting bigger?
Is that the general shape of things?
Eden: The perceived size stayed the same. As it was receding, it got bigger. If you held up your thumb, the one close and the one far away would look the same size.
They were size-matched independent of distance.
Erin: Interesting. I'm gonna have to show you this drawing when we're finished with this so we can see if that's right. It's like a little bit counterintuitive. I don't know why...
Eden: Perceptually, we wanted to make sure that the view of the image on the retina was the same, whether it was small and close or big and far. So the close one would be physically small, and as it moved back in space, it would get physically bigger.
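The geometry behind that size matching is the constant-visual-angle formula: physical size grows linearly with distance. A small sketch of that calculation follows; the 5-degree angle and the depths are assumptions, since the paper's exact values aren't quoted here.

```python
import math

def physical_size(distance_m: float, visual_angle_deg: float) -> float:
    """Physical diameter that subtends a fixed visual angle at a given distance."""
    return 2 * distance_m * math.tan(math.radians(visual_angle_deg) / 2)

# Hypothetical near/middle/far depths and a hypothetical 5-degree stimulus.
for d in (0.5, 1.0, 2.0):
    print(f"{d:.1f} m -> {physical_size(d, 5.0):.3f} m across")
# The far sphere is physically bigger, yet all three occupy the same
# area on the retina -- so depth is the only thing that changes.
```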
Erin: Okay. All right. I think I drew this right.
Can you help me understand the significance of the findings with the HEOG deflections? To go back to what we were talking about before, HEOG mostly measures the eye muscle activity so that you can factor that out later during analysis.
Can you connect that to what was happening with experiment two a little bit more concretely?
Eden: Well, we would expect them to be moderately different, because that's the one element that is changing. Again, the size of the orbs on the retina stays the same, but the relative positioning within each eye is changing.
So we would expect there to be different HEOG signals from the one that was closer versus the one that was far. In between the stimuli, the starting point was a fixation cross spatially in the middle.
Erin: Which conclusion, if anything, surprised you the most about this experiment?
Eden: Just how robust it was.
Erin: How was it robust?
Eden: Just a really strong oddball ERP.
Erin: If you could do this experiment again, what would you do differently, if anything at all?
Eden: Maybe have one additional HEOG electrode so you could do a re-reference to it. I think we had an interesting configuration. Typically you do one here and one here, on either side at the temples. But I think for this one we had one here and one here.
Erin: Oh, so ideally you'd want 'em on both sides and then like one in the middle.
Eden: Yeah, one on one side of the nose.
Typically you're looking at eye movements where the eyes are locked, so they're both looking left or they're both looking right. This depth is a condition where the eyes are actually doing two different things.
Erin: Ooh, okay.
Eden: But they're doing them, like, symmetrically.
Erin: And so in the experiment you only had like one on the side and one in the center. And what you would do differently in the future is have them on both sides and in the center.
Eden: Pretty much.
I'd be really curious to see if there's any way we can look at depth, and the ERPs of depth, to be able to investigate embodiment.
Erin: Embodiment?
Eden: Yeah. As well as look at cross-cultural differences. That would add a whole bunch more caveats, but I think there is a lot of compelling research from cognitive psychology around how much attention people from different cultures give to different elements within a scene.
I think the actual motor programs of how we process and perceive depth would likely be fairly integrated into that tendency, beyond just a cognitive difference.
Erin: Cool. All right, great.
Eden, thank you so much. This was awesome.
Eden: Yeah. Happy to help.