
Distinguishing EEG Signals from Other Frequencies

Our bodies are fascinating: they act as sources of several different kinds of non-visible electrical activity.

For instance, when you visit a doctor and they check your heart using an ECG (electrocardiogram), they are measuring the electrical activity of your heart. Similarly, our brain’s activity can be captured through what’s known as EEG (electroencephalography) signals.

EEG signals are essentially the ‘brainwaves’ that are produced by the electrical activities occurring in our brain. These signals are crucial for understanding different cognitive states. For example, certain EEG patterns can indicate whether a person is awake, asleep, or even in deep meditation.

EEG vs. ECG

It’s interesting to note that while the heart’s ECG signals are stronger and follow a distinct pattern (like the familiar heartbeat trace seen in medical dramas), EEG signals are unique to brain activity: they are more subtle and complex, reflecting the intricate workings of the brain.

Figure 1- Comparative Amplitudes: This side-by-side illustration shows the difference in electrical signal strength between EEG and ECG recordings. The EEG (bottom) peaks at 0.02 mV, showcasing the subtle electrical activity of the brain, while the ECG (top) reaches up to 2 mV, reflecting the heart’s more robust electrical activity.

The Body’s Electrical and Chemical Signals

Our body operates through both electrical and chemical signals. For example, when you feel happy, chemicals like dopamine are released in your brain. This can change your EEG patterns, reflecting your altered mood. Similarly, if someone consumes alcohol, this chemical change in the brain can also be detected through changes in EEG patterns.

Another example: Imagine someone on a Netflix binge, deeply engrossed in a gripping series. As they experience various emotions – excitement, suspense, joy – their brain’s electrical activity changes. These changes in EEG patterns can be indicative of their engagement level and emotional responses, showcasing the dynamic nature of our brain’s electrical activity in response to external stimuli.
Mosh and I had an intriguing conversation last week in Toronto about the types of EEG data we can detect and the inferences that can be drawn from them. We discussed how EEG data, altered by factors like emotional responses to a movie or the effects of substances like alcohol, can provide insights into a person’s cognitive state. However, it’s crucial to note that while EEG can offer a window into brain activity, it still does not allow us to read specific thoughts.

Detecting EEG Signals

While EEG signals are primarily detected on the scalp or in the ear, advanced techniques can sometimes detect these signals a very short distance away from the scalp. However, the idea of reading specific thoughts or psychic communication through EEG is currently not supported by scientific evidence. EEG provides a general picture of brain activity, not a detailed ‘read-out’ of thoughts.

Figure 2- Quantitative EEG Task Response: Slower and Weaker Brain Activity in Alcoholics Compared to Non-Alcoholics.

Individuality of EEG Signals

We received a question the other day: if several individuals are together in a room and they all have their earbuds in, could their signals overlap and pose a problem?

In environments where multiple individuals are present, each person’s EEG signals remain unique to them and do not overlap. This is because EEG detection requires sensors in close proximity to the source (in-ear or on the scalp).

So, if a group of people in a room are all wearing EEG headsets or in-ear EEG earbuds, their brainwave data would remain distinct and separate.

We welcome questions about EEG and other types of bodily signals. Our goal is to help clarify these complex topics and make them understandable, even for those who are new to neuroscience.

Science Behind the Tech: How our EEG-Sensing Earbuds Work

In this post, I’m going to explain more about how the earbud-to-device system works.

There are really four pieces to the apparatus, if you will — brainwaves to earbuds to processing algorithm back to device.

EEG → EARBUDS → HOST DEVICE → PROCESSING → HOST DEVICE (APP)

At a high level: analog EEG signals are picked up from the ear canal by our flexible sensor. An EEG module in the earbuds filters out noise and digitizes the electrical brain signals, converting them to data. The data passes to the Bluetooth module, which transmits it to a host device (usually a smartphone). The host completes a processing sequence and, based on the app’s embedded instructions, sends the resulting signal to our servers, where it is processed by our proprietary machine learning algorithms. Results come back to the host and are reflected to the user.
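To make that high-level flow concrete, here is a minimal Python sketch of the same chain of hand-offs. Every name in it (capture_window, denoise, to_packet, classify_remote, the sampling rate, the window length) is a hypothetical placeholder for illustration, not our actual firmware or API.

```python
# Hypothetical sketch of the earbud-to-device flow; names are illustrative only.
import numpy as np

SAMPLE_RATE_HZ = 250          # assumed sampling rate
WINDOW_SECONDS = 2            # assumed analysis window

def capture_window() -> np.ndarray:
    """Stand-in for the analog front end: 2 channels (left/right ear)."""
    return np.random.randn(2, SAMPLE_RATE_HZ * WINDOW_SECONDS)

def denoise(raw: np.ndarray) -> np.ndarray:
    """Earbud-side cleanup (placeholder: simple mean removal)."""
    return raw - raw.mean(axis=1, keepdims=True)

def to_packet(clean: np.ndarray) -> bytes:
    """What the Bluetooth module might ship to the phone."""
    return clean.astype(np.float32).tobytes()

def host_process(packet: bytes) -> np.ndarray:
    """Host-side step: decode and reduce before the cloud hop."""
    signal = np.frombuffer(packet, dtype=np.float32).reshape(2, -1)
    return signal.mean(axis=1)  # toy feature vector

def classify_remote(features: np.ndarray) -> str:
    """Placeholder for the server-side ML models."""
    return "engaged" if features.sum() > 0 else "neutral"

result = classify_remote(host_process(to_packet(denoise(capture_window()))))
print(result)  # reflected back to the user in the app
```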

Now, in more detail.

1. Naturally-Occurring Brainwave Data

Here’s how brainwave data occurs:

  • Electrical signals are continuously firing inside the brain’s neural network, autonomously and irrespective of conscious intent. Each type of signal has very specific features that we can measure (see the sketch after this list).
    • Performance: Memory, Bandwidth/Cognitive Load, Attention (Sustained, Divided, Executive, Selective)
    • Reasoning: Deductive, Inductive
    • Thinking: Fast, Slow
    • Sensory: Visual, Auditory
    • Feeling: Fear, Love, Happy, Sad (more sustained state-of-being)
    • Mood: Nervous, Anxious, Calm, Angry, Frustrated (more acute, sharp emotion)
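As one concrete example of the kind of measurable features mentioned above, the sketch below computes power in the canonical EEG frequency bands (delta through gamma) for a single window of data. The band boundaries are standard conventions in the EEG literature; the sampling rate and any mapping from band power to the domains listed above are assumptions for illustration, not a description of our actual feature set.

```python
# Hedged example: band-power features from one single-channel EEG window.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg: np.ndarray, fs: int = FS) -> dict:
    """Return average power per canonical band for one EEG window."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

window = np.random.randn(FS * 4)      # 4 seconds of placeholder EEG
print(band_powers(window))            # e.g. relatively higher alpha at rest
```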

2. Data Recorded via yourBuds

We’ve gotten some questions around radiation, safety, and whether signals pass through the brain. The answer is that, like all Bluetooth devices, ours will emit low levels of non-ionizing radiation. Exposure to low amounts of this type of radiation is standard and not harmful to humans. Also, no signals pass through the brain at all. We’re merely reading and recording the brainwaves, exactly like a neurofeedback test with an electrode cap; we’re just condensing that same technology from a multi-nodal scalp cap to in-ear.

» Related: Validation Study Shows In-Ear EEG Outperforms Brainwave Cap

Here’s how the earbuds work:

  • Dual-channel EEG reception (one earbud in each ear)
  • Conductive rubber nano-composite acting as the soft in-ear component (it looks similar to your typical silicone ear tips, but is made of a different material)
    • Connected to the EEG sensor and the PCB, where the analog and digital ICs are located
  • The microprocessor also handles the audio-related functionality
  • At the center of the rubber nano-composite is an audio module consisting of a speaker, microphone, and vibration sensor
    • All packaged in a vibration-damping enclosure

3. Data Transmitted to Host Device

Here’s how we automate the processing, interpretation and integration of individuals’ EEG data:

  • After brief denoising, signals are passed to the microprocessor and sent via Bluetooth to the host device (smartphone).
  • On the host device, signals are received, processed, and vectorized (to decrease the packet size) before being sent over the Internet (end-to-end encrypted) to our servers for cloud processing, as sketched below.
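For readers who want a concrete picture of the "vectorized to decrease the packet size" step, here is a deliberately naive sketch: a raw two-channel window is collapsed into a small feature vector, serialized, and compressed before upload. The chunk-averaging, JSON payload, and zlib compression are illustrative assumptions; the real on-device representation and the encryption layer are not described here.

```python
# Hypothetical packet-shrinking step on the host device (illustrative only).
import json
import zlib
import numpy as np

def vectorize(window: np.ndarray, n_features: int = 16) -> np.ndarray:
    """Collapse a (channels, samples) window into a small feature vector
    by averaging fixed-size chunks (a deliberately naive reduction)."""
    chunks = np.array_split(window, n_features, axis=1)
    return np.array([c.mean() for c in chunks], dtype=np.float32)

def build_payload(window: np.ndarray) -> bytes:
    features = vectorize(window)
    body = json.dumps({"features": features.tolist()}).encode()
    return zlib.compress(body)   # encryption would wrap this in practice

raw = np.random.randn(2, 500)            # ~2 s of 2-channel data at 250 Hz
payload = build_payload(raw)
print(len(raw.tobytes()), "->", len(payload), "bytes")
```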

4. Data Sent to Servers for Processing, Indexing, Classification

Here’s how we process and make sense of the data to prepare it for use:

  • Processing yields real-time emotional classification, attention-level indexing, and memory-workload estimation (see the sketch after this list)
    • Indexing is based on established research in the field, and has become possible using our proprietary AI for organizing, classifying, and predicting EEG signals with high accuracy in real time
    • Results are then streamed back to the host device
  • We plan to extend the cognitive domains and incorporate more features to form a 360-degree cognitive profile, expanding capabilities into entertainment, learning, and more
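To give a rough sense of what "indexing and classification" means in code, here is a toy stand-in built on an off-the-shelf scikit-learn classifier over random feature vectors. The labels, feature dimensionality, training data, and model choice are assumptions purely for illustration; they are not our production pipeline.

```python
# Toy stand-in for server-side classification: feature vector in, label out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))            # 16-dim feature vectors
y_train = rng.integers(0, 3, size=200)          # 0=calm, 1=engaged, 2=stressed

clf = LogisticRegression(max_iter=500).fit(X_train, y_train)

incoming = rng.normal(size=(1, 16))             # one vectorized packet
label = ["calm", "engaged", "stressed"][int(clf.predict(incoming)[0])]
print(label)                                    # streamed back to the host
```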

5. Algorithm Reflects Outputs as Experiences in yourSpace

Here’s how the algorithm and mobile interface use the information streamed back from our servers, and how they work on an individual basis:

  • The app, still being built, will house three native apps (built by neocore) corresponding to three discrete use cases
    • Users can also enter preferences into a preference center and give the app an overall objective if they have one
    • The algorithm constantly pulls in this real-time emotion, attention, and mood data
      • An RL (reinforcement learning) algorithm acts as a control layer, deciding how to use these techniques and which combination works best for each user (see the sketch after this list)
        • The RL agent improves as individual users keep using the product over time
  • We’ll leverage APIs to other streaming apps (generative music, podcasts, audiobook summaries, etc.) for data sharing (meaning you can do SSO from yourSpace into these apps)
    • With earbuds in, app integrations in place, permissions set, and users allowing app to act when prompted, custom device experiences can be had across use cases
      • Use case #1: Audio Augmentation. Dynamic rephrasing, recombining and regenerating of (monologue-only) podcast episodes to maximize attention. Includes autonomous discarding/elimination of excess, segment re-ordering, and imperceptible transitions (using GenAI to mimic the podcaster’s voice to bridge the quick gaps)
      • Use case #2: Adaptive Music. Mood-based playlists which dynamically populate in real time, bringing in best-fit songs to maximize stimulation and engagement. It’s not correct that “if you’re sad, we’ll show happy songs, or, if you’re angry, we’ll give you joyous music.” Fundamentally, the tech is built to maximize cognitive engagement per user. So if you’re sad, you might very well prefer an emotional playlist that amplifies the emotions of that sadness, as I do. That’s what I will continue to receive so long as my brain continues to emit signals of satisfaction for those frequencies. You can also program the app, as I might, that “hey, whenever you sense that I’m in my emotions, I prefer deep Drake songs.” We can then reflect that command. So this is self-learning.
      • Use case #3: Mental Highlighting. Auto bookmarking, saving and archiving of content (auditory or text-based) based on acute or sustained attention spikes. Removes the need to manually highlight content.
  • We’ll enable SDKs for future expansion (new apps, integrations) by developers, leading to our eventual vision of an app ecosystem
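For the technically inclined, here is one hedged interpretation of the "RL algorithm as control layer" item above: a minimal epsilon-greedy bandit that chooses which experience to apply and updates its estimates from an engagement score. The action set, reward signal, and learning rule here are illustrative assumptions, not our actual implementation.

```python
# Minimal epsilon-greedy bandit as an illustrative control layer (assumption).
import random

ACTIONS = ["audio_augmentation", "adaptive_music", "mental_highlighting"]

class ContentBandit:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in ACTIONS}   # running reward estimates
        self.count = {a: 0 for a in ACTIONS}

    def choose(self) -> str:
        if random.random() < self.epsilon:           # explore
            return random.choice(ACTIONS)
        return max(self.value, key=self.value.get)   # exploit

    def update(self, action: str, engagement: float) -> None:
        """engagement: e.g. a 0-1 score derived from attention/mood indexing."""
        self.count[action] += 1
        step = 1.0 / self.count[action]
        self.value[action] += step * (engagement - self.value[action])

bandit = ContentBandit()
for _ in range(100):                                 # simulated sessions
    action = bandit.choose()
    bandit.update(action, engagement=random.random())
print(bandit.value)
```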

The end result is completely “neuro-personalized” content experiences.

Our product roadmap includes dozens of other related use cases — some of which are a lot more futuristic, and should not be disclosed quite yet.

We know our DeepTech project is complex (and very conceptual), but this is how it works. I hope this post helps shed light on what we’re doing beneath the surface and helps you feel more comfortable about the safety, efficacy, and reliability of the product. Let us know any questions. We’re here to help.

Validation Studies Show In-Ear EEG Outperforms Brainwave Cap, AI Bot is Superior [New Research]

“Imagine your brain is like a busy city,” Razin Kamali, my cofounder, told me. “What we’re trying to do is see what’s happening in all of the different neighborhoods.”

We were discussing the science behind our BCI tech, specifically the results of two experiments we just ran.

“The in-ear EEG is like a tiny spy that sits in one neighborhood,” she shared, “while a scalp-based cap is like a detective that watches the entire city.”

The results of our validation studies proved that, as she puts it, “these two ‘spies’ (in-ear EEG and scalp-based EEG) are both very good at telling us the same story, especially when we’re just relaxing with our eyes open or closed. It’s like they’re both watching and reporting on the same events in the city.”

This was a powerful finding because it validated our hypothesis that slower brainwaves can be picked up just as well by smaller, bi-nodal receptors as by a typical multi-nodal cap.

“Now, think of graphs as maps that help us understand what’s happening [in the city],” she told me. “The graphs here show that when we look at a specific spot in the city (let’s call it T4 on the scalp or F3 on the forehead), the in-ear spy sees very similar things as the detective [with all the resources and surveillance cameras] on the scalp. It’s like two different cameras capturing the same view.”

Razin also said that “when we remove some distracting things like people blinking or moving their eyes, the in-ear spy still sees things the same way as the detective, especially at certain spots (like F3 and its matching in-ear spot).”

That means even if certain features are missed, others won’t be affected, which could have big implications for what data we actually need: we might not require someone’s full brainwave spectrum to paint an accurate picture and correlate it with their app experience.

“In simple terms, it means these two methods of watching brain activity ‘agree’ a lot, and the in-ear spy is a cool, easier way to keep an eye on what’s happening in someone’s ‘brain neighborhoods.'”

As a non-technical person, I found this metaphorical breakdown really helped paint the picture of how our tech is going to work, along with the value prop we’ll offer. We’re not just condensing the form factor; we’re reducing the need to collect excess data and improving the end-user CX at the same time.

If you are more technical and want to see the results of our validation experiments, keep reading.

What Our Two Studies Showed

In November, we conducted two experiments: one to evaluate the effectiveness of our in-ear form factor compared to a standard full-scalp cap and another to trial our beta chatbot. There were two major revelations:

  • First, our AI chatbot performed better due to the EEG feedback it received, making it more than just an ordinary AI chatbot.
  • Secondly, the prototype of our in-ear EEG earbud revealed a strong correlation to full-scalp readings, especially in scenarios like recording during eyes-open resting states.

AI wellness chatbot trial

The chatbot study was a single pilot experiment (with 6 subjects) in which participants engaged in conversations with a chatbot while we simultaneously recorded (in-ear and on-scalp) EEG. This study measured the effectiveness of connecting real-time EEG feedback to an LLM-powered wellness chatbot. Here are our findings:

Interestingly, the group that provided their brain feedback to the chatbot reported feeling better compared to the group that didn’t. This highlights the positive impact of integrating personal brain feedback into the chatbot interaction.
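We haven’t published the integration details, but one plausible, purely illustrative way to wire a real-time EEG-derived state into an LLM-powered chatbot is simply to inject the current state label into the prompt, as in the sketch below. The function name and prompt wording are assumptions, not the setup used in the study.

```python
# Illustrative only: feed an EEG-derived state label into a chatbot prompt.
def build_chatbot_prompt(user_message: str, eeg_state: str) -> str:
    return (
        "You are a wellness chatbot. The user's current EEG-derived state is "
        f"'{eeg_state}'. Adapt your tone and suggestions accordingly.\n\n"
        f"User: {user_message}"
    )

print(build_chatbot_prompt("I can't focus today.", eeg_state="low attention"))
```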

In-ear EEG validation experiment

We conducted another test to measure the effectiveness of in-ear receptors versus a brainwave cap. Here are those results:

This test showed similar patterns between the T4 channel on the scalp and the same-side ear, as well as a matching pattern between F3 and the corresponding in-ear electrode, excluding eye movement and blinking artifacts. This indicates that the in-ear EEG captures brain activity similarly to traditional scalp EEG, potentially offering a more convenient monitoring option.
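For those who want to run this kind of comparison on their own data, the sketch below correlates a scalp channel with a nearby in-ear channel after band-pass filtering away the low-frequency range where blink and eye-movement artifacts dominate. The filter settings, sampling rate, and synthetic signals are assumptions; they are not the parameters or data from our study.

```python
# Hedged sketch: correlate a scalp channel with an in-ear channel.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250                                   # assumed sampling rate
b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=FS)

def clean(x: np.ndarray) -> np.ndarray:
    """Band-pass to suppress slow blink/eye-movement drift (assumed settings)."""
    return filtfilt(b, a, x)

scalp_t4 = np.random.randn(FS * 60)        # placeholder 60 s recordings
in_ear = 0.8 * scalp_t4 + 0.2 * np.random.randn(FS * 60)

r = np.corrcoef(clean(scalp_t4), clean(in_ear))[0, 1]
print(f"Pearson r between scalp and in-ear channel: {r:.2f}")
```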

For this study, we did conduct the data transfer via wires (connected to the earbud prototype), but we will obviously shift this process to Bluetooth as we build out our MVP.

Our Product Vision

We know building great BCI hardware is difficult, but we’ve already done it (in a slightly different form factor, for our Phase I). We’ve spent the last four years building our brain-sensing hardware and software suite (for clinicians), so the task now is to transform it into earbuds. These two studies are encouraging as we continue to refine both the measurement component (earbuds) and the analysis component (algorithm and chatbot/app).

We do not see ourselves as a hardware company. We’re preparing to sell “brainwave-assisted mental wellness,” or, said another way, “neuro-personalized digital experiences.”

Right now, we cannot do that unless we build state-of-the-art hardware. But in the course of our journey, if we find an opportunity to pivot or distribute that experience in a better way, we’ll pursue it. Right now, building our own earbuds is our best and most economical way to push that broader vision forward. But it’s the neuropersonalized digital experience we’re hyper-focused on bringing to life.

We’re also designing our algorithms in a hardware-agnostic way so they’ll be able to act as an enabler in the future as the mounting hype around BCI takes off. If a licensing play emerges down the line, we’ll look at it. If we find a better or complementary delivery method, we’ll look at it.

But the studies recently executed and shared here are a monumental milestone in our march forward. We hope this article offers a bit more context on how our BCI works and what we’re planning for the future.