Can Listening to AI-Composed Music Boost Your Brain Function?

'Our mission is to solve ADD, anxiety and insomnia via audio brainwave training'


As I write this, I’m listening to music written and performed by a robot that Brain.fm built to help me focus. I think it might be working, giving me a tiny buzz. This surprises no one more than me. It could of course be a placebo effect. It probably is. In fact, the fact that I even wrote that last three-word sentence should tell you something about the attitude in which I approached Brain.fm’s offering: skeptically. Brain.fm is a new audio startup out of Chicago that produces music written by artificial intelligence that promises to get your brain into one of several desired states, from deep sleep to focused work. “Music is something that’s so powerful and it goes through all the brain,” Adam Hewett, a co-founder of the company, told the Observer. “We’ve been missing out on this opportunity for many years.”

Hewett has been a brain entrepreneur since 2003. First, he built equipment to help researchers test the way humans become attuned to rhythms. He has now built an artificial intelligence that makes layered electronic music at a constant beat in order to generate desirable mental states. “What the AI does, is it aligns every single note and drumbeat to the rhythm we are trying to stimulate,” Hewett explained.

We recently reported on another artificial intelligence that composed a whole musical score, but that was an artistic project. Brain.fm’s virtual Mozart has an agenda: to set a mood for you as surely as an Instagram filter.

Users get seven free sessions of listening to Brain.fm music before they have to pay to keep going. They enter the site, pick the state they want (I’m writing this using the intense-focus mode; the company has made another for more casual focus, such as reading or drawing) and then hit play.

“Our mission is to solve ADD, anxiety and insomnia via audio brainwave training,” the website attests. “Brain.fm is the first big step.”

After I started the company’s music stream for workers, for the first few minutes I was still bopping around the internet, distracted by things I’d forgotten to do, wanted to do or just wanted to “check.” I thought: well, this music is perfectly fine, but it’s not doing anything.

Then, I’ll confess, it started to hit me. I felt my chest a little more acutely, as if my heartbeat were up. It’s the feeling I used to get from coffee, long ago, before the thrill had gone. I’ve still been working in my scattered fashion, but I feel just a bit more elevated. After a session, users can give the system feedback on how well it helped them hit their target state. Hewett said the site uses that feedback in a number of ways to deliver better music over time.

Hewett said Brain.fm works by targeting our brain’s natural tendency to lock in on rhythms. Different rhythms tend to elicit different brain-wave responses (measured using electroencephalography). Scientists know, for example, that alpha waves tend to correspond to relaxation while beta and gamma waves tend to correspond to focus. The thinking goes: if music can nudge the brain toward producing more of the waves associated with a desired state, it can help induce that state.
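For reference, the brain-wave bands mentioned above are conventionally defined by frequency ranges. The cutoffs in this little Python lookup follow one common convention (boundaries vary slightly between sources); it is purely illustrative and not anything from Brain.fm:

```python
# Conventional EEG frequency bands and the mental states they are
# commonly associated with. Band boundaries differ a bit across sources.
EEG_BANDS = {
    "delta": ((0.5, 4.0), "deep sleep"),
    "theta": ((4.0, 8.0), "drowsiness, light sleep"),
    "alpha": ((8.0, 12.0), "relaxation"),
    "beta":  ((12.0, 30.0), "focus, active thinking"),
    "gamma": ((30.0, 100.0), "high-level concentration"),
}

def band_for(freq_hz: float) -> str:
    """Return the name of the EEG band a frequency (in Hz) falls into."""
    for name, ((lo, hi), _state) in EEG_BANDS.items():
        if lo <= freq_hz < hi:
            return name
    return "out of range"
```

So a 10 Hz rhythm sits in the alpha (relaxation) range, while a 20 Hz rhythm sits in the beta (focus) range the article discusses.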

Scientists have been studying the way our brains get in sync with music, a phenomenon they call entrainment, for a while. The company sent a bunch of papers from various researchers that it argues support Brain.fm’s strategy for attaining certain moods. Much of that work reports on testing people (often in the dozens) at rhythm-related tasks while they listen to rhythmic music.

I also found a recent investigation by lead researcher Dr. Sylvie Nozaradan, a neuroscience postdoc at the Université Catholique de Louvain, in Belgium, in which participants were asked to demonstrate their understanding of two different rhythms. “In both cases, the participants had to synchronize their tapping movement to the sequences of sounds,” she explained in an email, “to be able to tap in sync on each sound instead of just following each sound by a tap.”

Nozaradan’s research (one of her collaborators wrote a plain English report on it at The Conversation) found a correlation between the degree to which a subject’s brainwaves responded and how well they were able to match finger tapping with the music’s beat. Does that necessarily mean that listening to music attuned to certain beats will help you finish your TPS reports? I don’t think we know that yet.

Nozaradan didn’t comment on the applicability of her research to commercial applications.

It’s also worth pointing out that not everyone’s brain got in sync with the music equally well in her study, which may indicate that your mileage may vary.

However, Hewett contended that by hitting lots of frequencies, Brain.fm’s music can get more people in the groove. “What we’re trying to do is keep people focused, and the reason we stimulate different frequencies to do that in the beta range,” Hewett said, “is because the brain is complex. You can’t stimulate one frequency and say, ‘Okay, this is going to work for everybody.’”
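One common way audio software "stimulates" a target rhythm is by gently pulsing a track’s volume at that frequency (amplitude modulation). Brain.fm has not published its actual algorithm, so the sketch below only illustrates the general technique; the function name, the modulation depth, and the 16 Hz beta-range example are my own assumptions:

```python
import math

def amplitude_modulate(samples, mod_hz, sample_rate=44100, depth=0.3):
    """Pulse an audio buffer's volume at mod_hz (amplitude modulation).

    A minimal sketch of one entrainment technique, not Brain.fm's
    actual method. `depth` controls how deep the volume dips go.
    """
    out = []
    for n, s in enumerate(samples):
        t = n / sample_rate
        # Gain oscillates between (1 - depth) and 1 at the target rhythm.
        gain = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * mod_hz * t))
        out.append(s * gain)
    return out

# e.g. pulse a track at 16 Hz, in the beta (focus) range:
# focused = amplitude_modulate(track_samples, mod_hz=16.0)
```

Layering several modulation rates, rather than one, would be one way to "stimulate different frequencies" in the beta range, as Hewett describes.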

To keep the music interesting, the AI overlays that fundamental beat with other sounds ranging from orchestral to straight-up electronica. “We don’t want to draw attention but we don’t want you to tune it out either,” said Hewett, who used to compose the music himself by hand, so he has added a number of tricks to the AI that vary the sound while keeping users’ minds on goal. One particularly subtle strategy he’s added to Brain.fm’s repertoire: 3D sound.

Humans can perceive the location of sound, which is why you instinctively know just how far to turn when you hear someone walking up behind you. The Observer has previously reported on the importance of 3D sound to making believable virtual reality experiences and why recordings made in 3D have so much more meaning to blind people.

So Hewett made Brain.fm’s music spatial. Using a virtual system he developed, your brain perceives the music as located in space, the way you would a banjo player standing off to your left on the subway platform; except that if the banjo player were Brain.fm, he would slowly, slowly walk around you while he played. It sounds a little creepy when put that way, but the point is to create some subtle variety that keeps you listening.
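The circling-banjo-player effect can be approximated with simple equal-power stereo panning whose angle drifts slowly over time. This is only a rough sketch of the idea; Brain.fm’s actual spatializer is proprietary and presumably more sophisticated (true 3D audio typically uses head-related transfer functions), and the function name and orbit length here are my own choices:

```python
import math

def rotating_pan(mono, sample_rate=44100, orbit_seconds=60.0):
    """Pan a mono signal slowly back and forth around the listener.

    Uses equal-power panning: left/right gains are cos/sin of a pan
    angle, so perceived loudness stays constant as the source moves.
    """
    left, right = [], []
    for n, s in enumerate(mono):
        # Pan angle drifts between 0 (hard left) and pi/2 (hard right),
        # completing one sweep cycle every `orbit_seconds`.
        phase = 2 * math.pi * (n / sample_rate) / orbit_seconds
        theta = (math.pi / 4) * (1 + math.sin(phase))
        left.append(s * math.cos(theta))
        right.append(s * math.sin(theta))
    return left, right
```

The equal-power property (left² + right² stays constant) is what keeps the movement subtle: the source drifts without the overall volume pumping.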

That should, he hopes, help keep users from becoming too habituated to the rhythms, though I’m now looking forward to testing how well it lets me tune everything out. Brain.fm has another channel built to help with sleep, which the co-founders contend delivers dramatically deeper rest. I hope to be reporting on that function dead-to-the-world this evening.
