Blog

Distinguishing EEG Signals from Other Frequencies

Our physical bodies are fascinating, functioning as sources of several different kinds of non-visible electrical activity.

For instance, when you visit a doctor, and they check your heart using an ECG (electrocardiogram), they are measuring the electrical activity of your heart. Similarly, our brain’s activity can be captured through what’s known as EEG (electroencephalography) signals.

EEG signals are essentially the ‘brainwaves’ that are produced by the electrical activities occurring in our brain. These signals are crucial for understanding different cognitive states. For example, certain EEG patterns can indicate whether a person is awake, asleep, or even in deep meditation.

EEG vs. ECG

It’s interesting to note that while the heart’s ECG signals are stronger and have different patterns (like the familiar heartbeat pattern seen in medical dramas), EEG signals are unique to brain activity. They are more subtle and complex, reflecting the intricate workings of the brain.

Figure 1- Comparative Amplitudes: This side-by-side illustration shows the difference in electrical signal strength between EEG and ECG recordings. The EEG (bottom) peaks at 0.02 mV, showcasing the subtle electrical activity of the brain, while the ECG (top) reaches up to 2 mV, reflecting the heart’s more robust electrical activity.
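
To put that difference in perspective, here is a tiny Python sketch using the peak values from Figure 1; the exact numbers vary by person and by electrode placement, so treat these as orders of magnitude rather than hard limits.

```python
import math

# Peak amplitudes from Figure 1 (orders of magnitude, not hard limits)
eeg_peak_mv = 0.02   # EEG: the brain's subtle electrical activity, in millivolts
ecg_peak_mv = 2.0    # ECG: the heart's R-wave, in millivolts

ratio = ecg_peak_mv / eeg_peak_mv
print(f"ECG peaks are roughly {ratio:.0f}x larger than EEG peaks")     # ~100x
print(f"That is about {20 * math.log10(ratio):.0f} dB of difference")  # ~40 dB
```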

The Body’s Electrical and Chemical Signals

Our body operates through both electrical and chemical signals. For example, when you feel happy, chemicals like dopamine are released in your brain. This can change your EEG patterns, reflecting your altered mood. Similarly, if someone consumes alcohol, this chemical change in the brain can also be detected through changes in EEG patterns.

Another example: Imagine someone on a Netflix binge, deeply engrossed in a gripping series. As they experience various emotions – excitement, suspense, joy – their brain’s electrical activity changes. These changes in EEG patterns can be indicative of their engagement level and emotional responses, showcasing the dynamic nature of our brain’s electrical activity in response to external stimuli.

Mosh and I had an intriguing conversation last week in Toronto about the types of EEG data we can detect and the inferences that can be drawn from them. We discussed how EEG data, altered by factors like emotional responses to a movie or the effects of substances like alcohol, can provide insights into a person’s cognitive state. However, it’s crucial to note that while EEG can offer a window into brain activity, it still does not allow us to read specific thoughts.

Detecting EEG Signals

While EEG signals are primarily detected on the scalp or in the ear, advanced techniques can sometimes pick them up a very short distance away from the scalp. However, the idea of reading specific thoughts or psychic communication through EEG is currently not supported by scientific evidence. EEG provides a general picture of brain activity, not a detailed ‘read-out’ of thoughts.

Figure 2- Quantitative EEG Task Response: Slower and Weaker Brain Activity in Alcoholics Compared to Non-Alcoholics.

Individuality of EEG Signals

We received a question the other day: if several individuals are together in a room and they all have their earbuds in, would their signals overlap and pose a problem?

In environments where multiple individuals are present, each person’s EEG signals remain unique to them and do not overlap. This is because EEG requires sensors in close proximity (in-ear or on the scalp) for accurate detection.

So, if a group of people in a room are all wearing EEG headsets or in-ear EEG earbuds, their brainwave data would remain distinct and separate.

We welcome questions about EEG and other types of bodily signals. Our goal is to help clarify these complex topics and make them understandable, even for those who are new to neuroscience.

Science Behind the Tech: How our EEG-Sensing Earbuds Work

In this post, I’m going to explain more about how the earbud-to-device system works.

There are really four pieces to the apparatus, if you will — brainwaves to earbuds to processing algorithm back to device.

EEG → EARBUDS → HOST DEVICE → PROCESSING → HOST DEVICE (APP)

At a high level: analog EEG signals are picked up by our flexible sensor in the ear canal. An EEG module in the earbuds sifts through the noise and converts the electrical brain signals to data. The data passes to the Bluetooth module, which transmits it to a host device (usually a smartphone). The host completes a processing sequence and sends the resulting signal to our servers (based on the app’s embedded instructions), where it’s processed by our proprietary machine learning algorithms. Results come back to the host and are reflected to the user.
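
Here’s a minimal Python sketch of that flow. Every function here is a hypothetical placeholder (our firmware and API aren’t public), but it shows the four hops in order.

```python
import numpy as np

# Minimal sketch of the earbud-to-app flow described above.
# Every function here is a hypothetical placeholder, not our real firmware or API.

def read_in_ear_eeg(n_samples: int = 256) -> np.ndarray:
    """Stand-in for the analog front end: one window of raw 2-channel EEG."""
    return np.random.randn(2, n_samples) * 0.02  # ~0.02 mV scale

def denoise(raw: np.ndarray) -> np.ndarray:
    """Stand-in for on-earbud signal conditioning before Bluetooth transmission."""
    return raw - raw.mean(axis=1, keepdims=True)  # e.g., remove DC offset

def send_to_cloud(window: np.ndarray) -> dict:
    """Stand-in for the host app uploading data and receiving results back."""
    return {"attention": 0.7, "mood": "calm"}  # placeholder server response

raw = read_in_ear_eeg()        # 1) brainwaves -> earbuds
clean = denoise(raw)           # 2) on-device conditioning
result = send_to_cloud(clean)  # 3) host -> cloud processing
print(result)                  # 4) results reflected back to the user
```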

Now, in more detail.

1. Naturally-Occurring Brainwave Data

Here’s how brainwave data occurs:

  • Electrical signals are continuously firing inside the brain’s neural network, autonomously and irrespective of conscious intent. Each type of signal has very specific features that we can measure (a generic feature-extraction sketch follows this list).
    • Performance: Memory, Bandwidth/Cognitive Load, Attention (Sustained, Divided, Executive, Selective)
    • Reasoning: Deductive, Inductive
    • Thinking: Fast, Slow
    • Sensory: Visual, Auditory
    • Feeling: Fear, Love, Happy, Sad (more sustained state-of-being)
    • Mood: Nervous, Anxious, Calm, Angry, Frustrated (more acute, sharp emotion)
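
To make “features we can measure” concrete, here’s a generic band-power extraction sketch. The frequency bands are the standard delta/theta/alpha/beta/gamma ranges from the EEG literature; this is an illustration, not our production feature set.

```python
import numpy as np
from scipy.signal import welch

# Canonical EEG frequency bands (Hz); a generic illustration, not our feature set.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg: np.ndarray, fs: int = 250) -> dict:
    """Average power per band for a single EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

# Example on 10 seconds of synthetic data sampled at 250 Hz
print(band_powers(np.random.randn(2500)))
```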

2. Data Recorded via yourBuds

We’ve gotten some questions around radiation, safety, and whether signals pass through the brain. The answer is that, like all Bluetooth devices, ours will emit low levels of non-ionizing radiation. Exposure to low amounts of this type is standard and not harmful to humans. Also, signals do not pass through the brain at all. We’re merely reading and recording the brainwaves, exactly like a neurofeedback test with the electrode cap — we’re just condensing that same technology from a multi-nodal scalp cap to in-ear.

» Related: Validation Study Shows In-Ear EEG Outperforms Brainwave Cap

Here’s how the earbuds work:

  • Dual-channel EEG reception (one earbud in each ear)
  • Conductive rubber nano-composite acting as soft in-ear component (looks similar to your typical silicone ear tips, with different material)
    • Connected to EEG sensor and the PCB where analog and digital ICs are located
  • The microprocessor also serves the audio-related functionality
  • At the center of the rubber nano-composite is an audio module consisting of a speaker, microphone, and vibration sensor
    • All packaged in a vibration-damping enclosure

3. Data Transmitted to Host Device

Here’s how we automate the processing, interpretation and integration of individuals’ EEG data:

  • After brief denoising, signals are passed to the microprocessor and sent via Bluetooth to the host device (smartphone).
  • On the host device, signals are received, processed, and vectorized (to decrease packet size) before being sent over the Internet to our servers for cloud processing (end-to-end encrypted); a hypothetical sketch of this step follows below.
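
Here’s that hypothetical sketch of the vectorization step: the idea is simply that a compact summary travels over the Internet instead of raw samples. The field names and reduction choices are placeholders, not our actual wire format.

```python
import json
import numpy as np
from scipy.signal import decimate

def vectorize(window: np.ndarray) -> bytes:
    """Downsample and summarize one 2-channel EEG window into a small payload.
    Field names and reduction choices are placeholders, not our wire format."""
    reduced = decimate(window, q=4, axis=1)       # e.g., 250 Hz -> 62.5 Hz
    summary = {
        "mean": reduced.mean(axis=1).tolist(),
        "std": reduced.std(axis=1).tolist(),
        "ptp": np.ptp(reduced, axis=1).tolist(),  # peak-to-peak amplitude
    }
    # Serialization only; the real payload is end-to-end encrypted before upload.
    return json.dumps(summary).encode("utf-8")

window = np.random.randn(2, 500)                  # 2 seconds at 250 Hz, 2 channels
payload = vectorize(window)
print(f"{len(payload)} bytes sent instead of {window.nbytes}")
```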

4. Data Sent to Servers for Processing, Indexing, Classification

Here’s how we process and make sense of the data to prepare it for use:

  • Processing yields real-time emotional classification, attention-level indexing, and memory-workload estimation (a toy classification sketch follows this list)
    • Indexing is based on established research in the field, and has become possible using our proprietary AI for organizing, classifying, and predicting EEG signals with high accuracy in real time
    • Results then streamed back to host device
  • We plan to extend cognitive domains and incorporate more features to form a 360-degree cognitive profile to expand capabilities to entertainment, learning, and more
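
Here’s the toy classification sketch referenced above. It uses an off-the-shelf logistic regression on made-up band-power features purely to illustrate the step; our proprietary models are far more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))        # 5 band-power features per EEG window
y_train = (X_train[:, 2] > 0).astype(int)  # toy labels: 1 = "calm", 0 = "stressed"

clf = LogisticRegression().fit(X_train, y_train)

new_window = rng.normal(size=(1, 5))       # features from one incoming window
label = int(clf.predict(new_window)[0])
confidence = clf.predict_proba(new_window)[0, label]
print("calm" if label else "stressed", f"({confidence:.2f} confidence)")
```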

5. Algorithm Reflects Outputs as Experiences in yourSpace

Here’s how the algorithm and mobile interface use the information streamed back from our servers, and how they work on an individual basis:

  • The app, still being built, will house three native apps (built by neocore) corresponding with three discrete use cases
    • Users can also enter preferences into a preference center and program or instruct the app with an overall objective if they have one
    • Algorithm is constantly pulling in this real-time emotion, attention and mood data
      • An RL algorithm acts as a control layer, deciding how to use these techniques and which combination works best for each user (a minimal bandit-style sketch follows this list)
        • The RL agent improves as individual users continue using the app over time
  • Will leverage APIs to other streaming apps (generative music, podcast, audiobook summaries, etc.) for data sharing (meaning you can do SSO from yourSpace to these apps)
    • With earbuds in, app integrations in place, permissions set, and users allowing app to act when prompted, custom device experiences can be had across use cases
      • Use case #1: Audio Augmentation. Dynamic rephrasing, recombining and regenerating of (monologue-only) podcast episodes to maximize attention. Includes autonomous discarding/elimination of excess, segment re-ordering, and imperceptible transitions (using GenAI to mimic the podcaster’s voice to bridge the quick gaps)
      • Use case #2: Adaptive Music. Mood-based playlists which dynamically populate in real time, bringing in best-fit songs to maximize stimulation and engagement. It’s not correct that “if you’re sad, we’ll show happy songs, or, if you’re angry, we’ll give you joyous music.” Fundamentally, the tech is built to maximize cognitive engagement per user. So if you’re sad, you might very well prefer an emotional playlist that amplifies the emotions of that sadness, as I do. That’s what I will continue to receive so long as my brain continues to emit signals of satisfaction for those frequencies. You can also program the app, as I might, that “hey, whenever you sense that I’m in my emotions, I prefer deep Drake songs.” We can then reflect that command. So this is self-learning.
      • Use case #3: Mental Highlighting. Auto bookmarking, saving and archiving of content (auditory or text-based) based on acute or sustained attention spikes. Remove need to manually highlight content.
  • Will enable SDKs for future expansion (new apps, integrations) by devs leading to eventual vision for app ecosystem
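
And here’s the minimal bandit-style sketch referenced in the control-layer bullet above: an epsilon-greedy agent that learns which content strategy earns the highest engagement reward for one user. It’s a simplified stand-in for the RL agent, with hypothetical strategy names and a simulated reward signal.

```python
import random

# Hypothetical content strategies the control layer can choose between.
STRATEGIES = ["amplify_mood", "counterbalance_mood", "neutral_ambient"]

class StrategyBandit:
    """Epsilon-greedy bandit: a simplified stand-in for the RL control layer."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in STRATEGIES}
        self.values = {s: 0.0 for s in STRATEGIES}    # running mean reward

    def choose(self) -> str:
        if random.random() < self.epsilon:            # explore occasionally
            return random.choice(STRATEGIES)
        return max(self.values, key=self.values.get)  # otherwise exploit the best so far

    def update(self, strategy: str, reward: float) -> None:
        """Reward would come from an engagement score derived from EEG features."""
        self.counts[strategy] += 1
        self.values[strategy] += (reward - self.values[strategy]) / self.counts[strategy]

bandit = StrategyBandit()
for _ in range(1000):   # simulated listening sessions for one user
    s = bandit.choose()
    reward = random.gauss(0.8 if s == "amplify_mood" else 0.5, 0.1)  # fake preference
    bandit.update(s, reward)
print(bandit.values)    # the user's preferred strategy rises to the top
```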

The end result is completely “neuro-personalized” content experiences.

Our product roadmap includes dozens of other related use cases — some of which are a lot more futuristic, and should not be disclosed quite yet.

We know our DeepTech project is complex (and very conceptual), but this is how it works. I hope this post helps shed light on what we’re doing beneath the surface and helps you feel more comfortable about the safety, efficacy, and reliability of the product. Let us know any questions. We’re here to help.

Addressing the Top 5 FAQs About Neocore’s BCI (For Investors)

In our early conversations as we prepare to raise our pre-seed investment round, there are five main questions we’ve continued to get. I’d like to address those here.

#1. Are there FDA or regulatory concerns? What about liability and litigation from consumers?

No. Since we’re not deploying into medical environments, making diagnoses, nor sending a signal through the brain (only reading existing brainwaves), we’re clear from a regulatory perspective. Our early adopters will be avid wellness tech enthusiasts who are more interested in getting their hands on the hottest new thing than jumping to sue if something isn’t perfectly right. We’ll need to include liability disclosure in “the small print” of course, and we already have corporate counsel ready to go on the sidelines who will connect us to the right tech-specific firm to handle this.

#2. How will you go up against big tech players already patenting and building comparable BCI?

We don’t see any of The Big 5 as direct competitors as much as potential acquirers down the line. Our form factor itself isn’t as defensible as what it does. Our moat is really our proprietary AI algorithm. Moreover, it’s our system: “brainwave to tech to content” for a wellness application. That capability will be hard for anyone to duplicate, especially given our experience in the 100+ clinics we’re in now, using that data to inform our app, and having the wellness background we do. Any innovation in the category is a net positive for us.

#3. Why would a consumer buy your earbuds versus regular ones and wellness apps they already use?

Existing earbuds and wellness apps aren’t cognitively connected. And, these are separate products. Sure, apps like Calm, Headspace, or even YouTube adapt based on how you use them. But they don’t operate based on how you think–on what’s going on inside of the brain. That’s our big value prop: using your brainwave data to guide your app experience. If you just want regular old earbuds, go for it. Our earbuds are merely a conduit to enabling a completely “neuropersonalized” device experience.

#4. Making and selling hardware is a difficult road. Why not stick to (or start with) software?

We know scaling the hardware piece will require tact and smart execution. Two things here: one, we are not a hardware company. We’ll sell hardware, but the value is in the experience of customizing your device interaction based on EEG data. That’s our focus. But we need hardware to do that. Two: we’re not set in stone on the hardware piece. We’re open to pivoting and innovating away from hardware if we find we need to. Our vision is to create an ecosystem of products, not to be beholden to earbuds. We can license either software or hardware, sell the rights to the hardware side of the business, or eventually evolve to not needing the hardware component in the future. But for now, it’s the most economical, efficient, minimalist form factor to enable our application.

#5. You guys already have traction with clinics. Why not stick to B2B? Why pivot to consumer?

The market size is limited for B2B: there are only so many psychotherapy clinics doing so many EEG tests per year (an active one might perform 6,000 in a year). But there’s virtually no ceiling on growth potential for consumer wellness. By 2030, that market will be worth $7 trillion. The numbers alone support our decision to go the B2C route. Eventually, BCI will become mainstream. It may take 10, 12, or 15 years to get there, but a technology this pioneering and innovative will become mass adopted on a global scale just like the iPhone did. We’re aiming to lead that charge: we can build it, we can ship it, and we can market it.

Please reach out to me directly with any further questions or clarification! I am here to help! Stay tuned and subscribe for more!

~ Michael

Validation Studies Show In-Ear EEG Outperforms Brainwave Cap, AI Bot is Superior [New Research]

“Imagine your brain is like a busy city,” Razin Kamali, my cofounder, told me. “What we’re trying to do is see what’s happening in all of the different neighborhoods.”

We were discussing the science behind our BCI tech, specifically the results of two experiments we just ran.

“The in-ear EEG is like a tiny spy that sits in one neighborhood,” she shared, “while a scalp-based cap is like a detective that watches the entire city.”

The results of our validation studies proved that, as she puts it, “these two ‘spies’ (in-ear EEG and scalp-based EEG) are both very good at telling us the same story, especially when we’re just relaxing with our eyes open or closed. It’s like they’re both watching and reporting on the same events in the city.”

This was a powerful finding because it validated our hypothesis that slower brainwaves can be picked up just as well by smaller, bi-nodal receptors as by a typical multi-nodal cap.

“Now, think of graphs as maps that help us understand what’s happening [in the city],” she told me. “The graphs here show that when we look at a specific spot in the city (let’s call it T4 on the scalp or F3 on the forehead), the in-ear spy sees very similar things as the detective [with all the resources and surveillance cameras] on the scalp. It’s like two different cameras capturing the same view.”

Razin also said that “when we remove some distracting things like people blinking or moving their eyes, the in-ear spy still sees things the same way as the detective, especially at certain spots (like F3 and its matching in-ear spot).”

That means even if certain features are missed, others won’t be affected, which could have big implications for us in terms of what data we need — meaning we might not actually require all of someone’s brainwave spectrum to paint an accurate picture and correlate it with their app experience.

“In simple terms, it means these two methods of watching brain activity ‘agree’ a lot, and the in-ear spy is a cool, easier way to keep an eye on what’s happening in someone’s ‘brain neighborhoods.'”

As a non-technical person, this metaphorical breakdown really helped paint the picture for me of how our tech is going to work, along with the value prop we’ll offer. We’re not just condensing form factor, we’re reducing the need to collect excess data and we’re improving the end user CX simultaneously.

If you are more technical and want to see the results of our validation experiments, keep reading.

What Our Two Studies Showed

In November, we conducted two experiments: one to evaluate the effectiveness of our in-ear form factor compared to a standard full-scalp cap and another to trial our beta chatbot. There were two major revelations:

  • First, our AI chatbot performed better due to the EEG feedback it received, making it more than just an ordinary AI chatbot.
  • Secondly, the prototype of our in-ear EEG earbud revealed a strong correlation to full-scalp readings, especially in scenarios like recording during eyes-open resting states.

AI wellness chatbot trial

The chatbot study was a single pilot experiment (on 6 subjects) in which participants engaged in conversations with a chatbot while their EEG was simultaneously recorded (in-ear and on-scalp). This study measured the effectiveness of connecting real-time EEG feedback to an LLM-powered wellness chatbot. Here are our findings:

Interestingly, the group that provided their brain feedback to the chatbot reported feeling better compared to the group that didn’t. This highlights the positive impact of integrating personal brain feedback into the chatbot interaction.

In-ear EEG validation experiment

We conducted another test to measure the effectiveness of in-ear receptors versus a brainwave cap. Here are those results:

This test showed similar patterns between the T4 channel on the scalp and the same-side ear, as well as a matching pattern between F3 and the corresponding in-ear electrode, excluding eye movement and blinking artifacts. This indicates that the in-ear EEG captures brain activity similarly to traditional scalp EEG, potentially offering a more convenient monitoring option.
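
For readers who want a concrete picture of what “similar patterns” means, here’s a generic sketch of quantifying agreement between a scalp channel and its in-ear counterpart: band-pass both signals and compute their Pearson correlation. It mirrors the idea of the analysis rather than the exact pipeline we used, and the signals below are synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import pearsonr

def bandpass(x: np.ndarray, lo: float = 1.0, hi: float = 40.0, fs: int = 250) -> np.ndarray:
    """Keep the 1-40 Hz range where most EEG activity of interest lives."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 250
t = np.arange(0, 60, 1 / fs)  # one minute of synthetic data
scalp_t4 = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # shared 10 Hz rhythm
in_ear   = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # plus independent noise

r, p = pearsonr(bandpass(scalp_t4, fs=fs), bandpass(in_ear, fs=fs))
print(f"correlation r = {r:.2f} (p = {p:.1e})")  # high r means strong agreement
```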

For this study, we conducted the data transfer via wires (connected to the earbud prototype), but we will obviously shift this process to Bluetooth as we build out our MVP.

Our Product Vision

We know building great BCI hardware is difficult but we’ve also already done it (in a slightly different form factor for our Phase I). We’ve spent the last four years building our brain-sensing hardware and software suite (for clinicians), so the task now is to transform it to earbuds. These two studies are encouraging as we continue to refine both the measurement component (earbuds) and the analysis component (algorithm and chatbot/app).

We do not see ourselves as a hardware company. We’re preparing to sell “brainwave-assisted mental wellness,” or, said another way, “neuro-personalized digital experiences.”

Right now, we cannot do that unless we build state-of-the-art hardware. But in the course of our journey, if we find an opportunity to pivot or distribute that experience in a better way, we’ll pursue it. Right now, building our own earbuds is our best and most economical way to push that broader vision forward. But it’s the neuropersonalized digital experience we’re hyper-focused on bringing to life.

We’re also designing our algorithms in a hardware-agnostic way so they’ll be able to act as an enabler in the future as the mounting hype around BCI takes off. If a licensing play emerges down the line, we’ll look at it. If we find a better or complementary delivery method, we’ll look at it.

But the studies recently executed and shared here are a monumental milestone in our march forward. We hope this article offers a bit more context into how our BCI works and what we’re planning for the future.


Reflections on the Future of BCI & The Technological Singularity

The concept of “the technological singularity” was first described by Vernor Vinge in 1993.

Futurist Ray Kurzweil later popularized and expanded upon it in a more modern context over the next couple decades.

Our philosophy on the mergence of technology and humanity is simple: it’s going to happen, regardless of how we, as a race and as individuals, feel about it. And it won’t really be something you can “opt out” of, at least not on the long tail.

The only question, then, is “how will it happen?”

I used to be skeptical about human-computer interfacing because of the lens I had on a few years back when I first started learning about it in the context of transhumanism and the “negative” agenda to turn people into robots that I was reading about in David Icke books.

But then I realized that was only one angle. One possibility. And, if we and other benevolent players are successful, an improbability.

Part of our mission is to help spearhead this mounting tidal wave in a positive-sum, additive, benevolent way.

As I awakened more deeply to the beautiful ways that smart technology can and already is helping people across almost every aspect of life, including to improve cognition and wellness, I realized something powerful: it is our destiny to evolve with smart technology. This is quite literally how we will take our next evolution as a species.

Technology like BCI will be an incredible magnifier of human intellect, an amplifier of cognition and force multiplier of our inborn capacities to create, perform, and thrive.

As I wrote in an article on my old blog:

Humanity is in the midst of making a massive quantum leap forward that will mark our transition from a primitive, Earthbound, humanoid race to a technologically-driven species capable of innovations orders of magnitude greater than anything we’ve imagined thus far.

Brain-computer interfaces are one subset of that evolution.

In and of itself, connecting cognition to technology is not a bad or dangerous or risky thing to do. It’s actually an amazing and bold step forward that, in my view, will be much more transformative than landing on the moon or inventing flight or maybe even the invention of the light bulb.

If you could take a computer – which is inherently neutral and hundreds of times “smarter” than humans are – and train it to partner with you, almost as a digital twin, so it’s completely in service to you, and connect it to your “quantified self” (your biological and psychological make-up), we’re talking about something that can only be additive if leveraged without contamination or alteration by some outside thing.

This is what the best BCIs can do.

Neuralink and Synchron are to be applauded for their work, but invasive implants just aren’t right for nor needed by the masses yet. They can obviously be life-saving for people who need them and are suffering from conditions like paralysis or loss of limbs. Think about having your movement restored after a horrific accident, or suddenly being able to communicate your thoughts on a screen after years of being unable to communicate. In those cases, these technologies are likely seen as an absolute Godsend. To others on the very far end of the spectrum, they are a scary entry point to a very uncertain future: a desensitized population willing to surgically alter their minds at the drop of a hat, get microchips installed like buying a new outfit at the mall, and be mind-controlled by BCIs run by corporations whose agendas are iffy.

So much of how you look at not just brain-computer interfacing but mergence with tech and AI in general depends on your perspective.

Just because the Metaverse will exist one day doesn’t mean you have to go into it. Just because certain human-computer technologies will soon hit the market doesn’t mean they’re necessarily “right” for everybody. We should use our own judgment.

So there has to be a certain level of willingness, both collectively, and personally, I think, where we’re going to have to say either “no, I’m just not ready to hook anything up to my body or brain that’s not already biologically a part of myself,” or “you know what, let me see, let me wait and see what happens, I can imagine how technologies like this might work, might be able to help me live a better life, and why would I shut myself off to something like that if it exists?”

In terms of when the singularity is going to happen, Kurzweil postulates it’ll be here by 2045. So, 20 more years, give or take. In terms of how it’ll happen, I see it as three layers or three steps:

  1. Body-to-tech (wearables) – already here, been here for a while (FitBit, Apple Watch, etc)
  2. Brain-to-computer (BCIs) – already here, still in its infancy, very much a frontier market
  3. Consciousness-to-computer – not yet here, but being worked on (memory storage and transfer, uploading human data to computers, etc)

Our mission, of course, is to participate and usher in the future of “step #2” with safe, non-invasive products that are specifically geared for optimization of mental wellbeing.

Our vision is really to connect cognition to content consumption. We want to, and will, make it possible to receive totally tailored device experiences based on your brainwave data and what it indicates you need for maximal performance.

As long as your earbuds are in, they’ll be able to pick up dozens of what are called “features,” which are different types of brainwave patterns related to all sorts of dimensions of thought: intentions, desires, inhibitions, emotions, cognitive bandwidth, mood, level of wakefulness, and a lot more. They’ll then communicate that data to your phone, where our AI-driven app will receive it, decipher it, and use it to accurately present and predict exactly what you need based on the underlying goal you will have already told the app.

So, there’s going to be all sorts of really narrow use cases for BCI, and we’re already seeing this in the space. There are firms focusing on sleep enhancement and focus for gamers and epilepsy control and all of these things. But our focus is predictive mental wellbeing, or what I sometimes describe as cognitively-guided device optimization.

To us, this is one of the most important areas of focus as the singularity nears. As tech continues to become more interwoven into all of our lives (and it’s not going away, it’s only getting stronger), we stand at a precipice now where we’re either going to become entirely dependent and addicted to it (which some people already are who we hope to help) or where we’re going to learn to leverage it as our servant instead of our master, which we hope to help push forth.

At any rate, I hope this post sheds a little light into my thinking around all of these topics. There’s certainly a lot more I could go into here but that’ll be for future articles.

Be sure to subscribe for future updates and insights as we prepare to launch in less than a year!

-Michael

Neocore Team Attends GITEX North Star in Dubai to Exhibit & Pitch New Venture

In November, our founding team had an amazing opportunity to travel to Dubai to attend, exhibit, and pitch our venture at the largest startup event in the world, GITEX’s annual North Star.

Since deeptech is the next big frontier, we weren’t surprised to meet a dozen or so other neurotech firms all doing innovative things in related fields, which further validated the space as a “next up” technology category. This was good to see on an international scale.

This was our first in-person event as a founding team which made it all the more exciting, as well. This was also our first time revealing our “Phase II” to the world which is the consumer focus.

Almost all of the startups in attendance were tech firms. We’re incredibly excited to be on the very forefront of this mounting tidal wave that is human-computer interfacing.

One of the things we discussed as a team was the evolution of technology over the past decade and how the First Wave (Web 1.0) led into the Second Wave (Web 2.0). We’re noticing an acceleration in both speed and capability as the Third Wave continues to mature. It will rapidly lead into the Fourth Wave, characterized by deep AI connected to BCI, which will mark a major step-function change from the AI-driven technologies consumers are more or less used to by now.

Siri, Alexa, GPT, Cortana, voice search… these are all great, and they set the stage. But they require actively prompting the tech in order to get a desired result.

We don’t. The big value point with BCI is that it removes the work. You don’t need to think about it or do anything and it still works.

The wellness angle is extremely intriguing from this perspective. Let’s say I’m doing a dopamine detox. I’d program my chat-enabled app with my objective one time. Then I’d let it get to work. It may ask me if it can silence certain notifications and restrict access to Netflix and Facebook if it knows I’m over-stimulated using those apps. I’d simply tap “yes.” Then it may create a compilation of music frequencies specifically geared to suppress stimulation in the part of my brain associated with dopamine release to help me rebalance. Again, I tap “Listen” and there it is.

It’ll never override or dictate what I choose to think or do, always asking for approval to implement recommendations or to alter my feed. What’s cool, too, is that BCI tech like ours can filter through every single naturally-occurring brainwave pattern, feature, and amplitude… all of which have distinct meaning.

One of the “catch phrases” we came up with at the event that really stuck with us was, “imagine having complete cognitive control of your digital world.” That’s exactly what we’re building, and will be offering to the world, at scale.

Stay tuned for more by subscribing. Much more to come!

~ Michael

Update from Antler Dubai: Live Pitching, Iterating, Solidifying our Vision

We’ve been in Dubai for a month working on our business.

Over the course of the past four weeks at Antler, we’ve made tremendous progress. We’ve:

  • Refined our go-to-market and product plan, challenging and evolving the hypotheses we had when we arrived
  • Conducted in-depth user research including 10 qualitative interviews plus creation of a conjoint analysis to better understand customer preferences
  • Meticulously refined our pitch and pitch deck

Last week, we presented in front of the entire cohort and the partners for the second time, revealing more about how we’ll meet the incredible opportunity in consumer EEG.

Mosh and I practicing our pitch

Without giving away too much, the biggest opportunity we see is in digital content.

Aside from business planning, participating in Antler has brought us closer together as a team.

We’ve gotten to have productive team sessions multiple times per day, diving deep on key topics on a daily basis. I’ve been amazed at how well we gel together and how much momentum we create as a cofounding unit.

Our focus the past week has been on understanding our customer avatar, refining our anticipated COGS, developing our business model and anticipated pricing strategy, beginning on brand and messaging work, and, again, refining the pitch.

Razin and I at work

With myself focused on GTM, growth and business, Mosh meticulously focused on vision, product and strategy, and Razin heads-down on the neuroscience, research and user testing, we have the perfect team to spearhead and scale the business.

We’re almost halfway through the Antler cohort and looking forward to the next month-and-a-half.