Piano Roll :


Tonality :


Emotion wheel (unplugged):


Happy
Grumpy
Dopey
Sneezy
Sleepy
Bashful
Doc

Genre (unplugged):









If there is no sound in the video above: View on YouTube

Wisteria GistNoesis : personal music tutor


What do I see?


At the top there is the log-spectrogram, which shows the frequencies where the audio energy lies.
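The page doesn't state the exact axis parameters, but the idea of a log-frequency axis can be sketched as follows (the `fMin` and `binsPerOctave` values are illustrative, not Wisteria's actual settings):

```javascript
// Map a frequency (Hz) to a bin index on a log-frequency axis,
// as used by a log-spectrogram. With 12 bins per octave, each
// bin is one semitone, so equal musical steps are equal pixel steps.
function logBin(freqHz, fMin = 27.5, binsPerOctave = 12) {
  return Math.round(binsPerOctave * Math.log2(freqHz / fMin));
}

// A4 (440 Hz) lands 48 semitone bins above A0 (27.5 Hz).
console.log(logBin(440)); // 48
```

This is why a log-spectrogram suits music better than a linear one: the distance between any note and the next semitone is constant everywhere on the axis.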

At the bottom there is a piano roll computed by a neural network.

In the spectrogram, a single note corresponds to multiple lines (its harmonics); the neural network has learned to identify the notes from the spectrogram.
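The harmonics mentioned above are simply integer multiples of the note's fundamental frequency, which is why one key press draws several parallel lines:

```javascript
// The first few harmonics of a single played note: integer
// multiples of the fundamental frequency, each showing up as
// its own line in the spectrogram.
function harmonics(f0, count) {
  return Array.from({ length: count }, (_, n) => f0 * (n + 1));
}

console.log(harmonics(440, 4)); // 440, 880, 1320, 1760
```

The network's job is the inverse problem: collapse that whole harmonic stack back into the single note that produced it.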

You can change the neural network to change the piano roll visualization.

You can change the tonality (which will soon be inferred automatically), and it will change the color map.
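How the tonality drives the color map isn't specified on this page; a minimal sketch of one plausible scheme, assuming in-scale degrees of a major scale get distinct hues and out-of-scale notes are greyed out (the hue spacing of 51 degrees is just 360/7 rounded, an illustrative choice):

```javascript
// Color a note by its scale degree relative to the chosen tonic.
const MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]; // semitones above the tonic

function noteColor(midiNote, tonicPitchClass) {
  // Pitch class of the note relative to the tonic, always in 0..11.
  const degree = (((midiNote - tonicPitchClass) % 12) + 12) % 12;
  const idx = MAJOR_SCALE.indexOf(degree);
  // In-scale notes get a hue spread around the wheel; out-of-scale
  // notes are drawn grey so they stand out as "foreign".
  return idx >= 0 ? `hsl(${idx * 51}, 80%, 50%)` : "grey";
}

console.log(noteColor(60, 0)); // C over a C tonic -> "hsl(0, 80%, 50%)"
console.log(noteColor(61, 0)); // C# over a C tonic -> "grey"
```

Changing the tonality then simply shifts which pitch classes count as in-scale, recoloring the whole piano roll.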

Wisteria helps you learn music by providing real-time semantic visualisations.


You can see the brain of a bot which has spent multiple lifetimes listening to music.

By giving you real-time feedback, it helps you develop your musical hearing and see the musical motifs.

It is often hard for beginners to hear themselves play, but now you can see how well you played.

Through visual gamification, it will help you learn.

Improve your playing


Whether you play the piano, the violin, the flute or any other instrument, it will bring you useful information.

For the violin, it will show you when your notes have the right pitch.

For the flute, it will show you the various breathing artefacts.

For the piano, it will highlight the musical hierarchies.
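For intonation feedback like the violin case above, the standard measure is the deviation in cents from the nearest equal-tempered note. A sketch, assuming an A4 = 440 Hz reference (the page doesn't specify the tuning reference):

```javascript
// Distance, in cents, from a detected frequency to the nearest
// equal-tempered note (A4 = 440 Hz). Within roughly +/- 10 cents
// is usually perceived as "in tune".
function centsOff(freqHz, a4 = 440) {
  const semitones = 12 * Math.log2(freqHz / a4);
  return 100 * (semitones - Math.round(semitones));
}

console.log(centsOff(440)); // 0 (a perfect A4)
console.log(centsOff(452)); // ~ +46.6 (badly sharp, almost a quarter tone)
```

Visualizing this number in real time is what lets a violinist see, rather than guess, whether a note was pitched correctly.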

It can monitor your errors, track your progress, and adapt its teaching style to suit you better.

Wisteria is powered by deep learning using TensorFlow.js



Deep-learning brings superhuman performance into the realm of everyday life.

It's state of the art.

It gets better with more powerful computers.

Runs inside your browser, no install necessary


Runs on Chrome/Firefox/Edge, on Ubuntu / Windows / Android / iOS.

Current performance on mobile devices will improve in the near future.

Customizable


Music can be seen and learned in a wide variety of ways, therefore a wide variety of visualisations will be available.

Absolute hearing (the current visualization) and relative hearing (coming soon) are two of the best-known ways of listening.
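The difference between the two modes can be shown in a few lines: absolute hearing tracks the notes themselves, while relative hearing tracks only the intervals between consecutive notes, so a transposed melody looks identical:

```javascript
// Relative view of a melody: the interval (in semitones) from
// each note to the next, independent of the starting pitch.
function toIntervals(midiNotes) {
  return midiNotes.slice(1).map((n, i) => n - midiNotes[i]);
}

// The same melody opening in C and in D: different notes...
console.log(toIntervals([60, 62, 64, 60])); // [2, 2, -4]
console.log(toIntervals([62, 64, 66, 62])); // [2, 2, -4] ...same intervals
```

A relative-hearing visualization would therefore color or position notes by these intervals rather than by their absolute pitch.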

A useful tool for teachers to help their students pinpoint their mistakes.

See the project on the Wisteria GistNoesis GitHub page to develop your own networks or run it locally.

Customize your datasets to suit your needs.

Privacy conscious


Learning an instrument is a hard task, which takes time and effort.

It is exceedingly easy to manipulate your emotions, because as the teacher we control the rewards.

We can monitor your state of mind and your personality through the way you play.

Because of all this, our source code is shared and can be run locally.

Monetization


Ads about instruments, or music lessons will have a high CTR (because of the time spent on page), and be valuable (because of the context).

For the time being, GistNoesis would rather take donations and spend time helping you learn music than work on maximizing the cash value it can get out of your kids.

As always with GistNoesis projects, once the grace period is over, if the project hasn't reached its objectives, it will disappear and the technology behind it will be put towards more fruitful objectives.

This art project is there to highlight that in AI, the biggest challenge isn't the technical side but the alignment of incentives.

Technology


The technology behind Wisteria is quite generic, and can be applied to a wide range of domains.
You can monitor your sleep by detecting your breathing.
You can monitor the local fauna by listening to and identifying birds.
You can monitor suspicious noises in car engines.
You can do speech recognition.
You can do voice identification.
You can build the future of offline advertising: every morning you task humans to say the right things to other humans, and have Wisteria validate the task completion and reward accordingly.

FAQ:


I don't see the same result as in your video :


Even though we randomize the microphone frequency response, it's likely that your microphone or recording settings are different from what the bot was trained on. Try a different microphone, or come back soon when we have better models.

Your piano might be tuned differently; come back soon, we have solutions almost ready for this problem.

If you are using a mobile device and the result is garbage, it's probably due to a combination of the microphone and 16-bit precision computations. So come back soon, or try a different device.

It lags :


Close some other applications to free memory, and stop other CPU/GPU-intensive tasks.
Memory usage shouldn't grow over time.
It usually needs a warm-up period after loading a new network to reach a stable processing regime.
We might drop audio frames to stay in sync with real-time.
Use smaller and faster networks, or a better computer.
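The frame-dropping strategy isn't detailed here; one common approach is a bounded queue that discards the oldest pending frames when inference can't keep up (the default capacity of 8 below is an arbitrary illustrative choice):

```javascript
// A bounded frame queue: when the neural network falls behind,
// the oldest pending audio frames are dropped so the display
// stays in sync with real time instead of lagging further.
class FrameQueue {
  constructor(capacity = 8) {
    this.capacity = capacity;
    this.frames = [];
    this.dropped = 0; // count of discarded frames, for diagnostics
  }
  push(frame) {
    if (this.frames.length >= this.capacity) {
      this.frames.shift(); // drop the oldest frame
      this.dropped += 1;
    }
    this.frames.push(frame);
  }
}

const q = new FrameQueue(2);
[1, 2, 3].forEach((f) => q.push(f));
console.log(q.frames, q.dropped); // frames: [2, 3], dropped: 1
```

Dropping old frames trades a small gap in the visualization for low latency, which matters more when you are playing along live.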

I'm not impressed by the performance :


We have built the infrastructure and use the simplest models.
We are heavily constrained on the computation side.
We will stay roughly one year behind the latest research papers: our original know-how won't be published until equivalent ideas are, because certain ideas are widely applicable and shouldn't be wasted.

I'm impressed by the performance :


Don't forget to check the Wisteria GistNoesis GitHub page for more technical details.


You can find the donate button there, as this page doesn't load external resources.

You can contact us at gistnoesis@gmail.com

Thanks :)