How was Synplant 2's ML trained?

Sprite · 557 views · 13 posts
  • Sprite

    Just curious, because I've never seen anything like it before. Did a lot of beta users have to submit meticulously tagged presets in order to train the new function?

  • Magnus Lidström

    No, it is trained on an endless corpus of algorithmically created sounds.
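
    Magnus doesn't spell out the pipeline here, but "an endless corpus of algorithmically created sounds" suggests the training pairs can be manufactured by the synth itself: draw random parameter settings, render them, and keep the (audio, parameters) pair, so the labels come for free. The sketch below only illustrates that idea and is not Sonic Charge's code; `render_patch`, `NUM_PARAMS` and the toy sound mapping are invented stand-ins for the real Synplant engine.

    ```python
    import numpy as np

    NUM_PARAMS = 37          # assumed patch-vector size, purely illustrative
    SAMPLE_RATE = 44100
    NOTE_SECONDS = 1.0

    def render_patch(params: np.ndarray) -> np.ndarray:
        """Stand-in for the real Synplant engine: params in [0, 1] -> mono audio."""
        t = np.linspace(0.0, NOTE_SECONDS, int(SAMPLE_RATE * NOTE_SECONDS), endpoint=False)
        freq = 55.0 * 2.0 ** (params[0] * 5.0)      # fake parameter-to-pitch mapping
        decay = 1.0 + 20.0 * params[1]              # fake parameter-to-envelope mapping
        return (np.exp(-decay * t) * np.sin(2.0 * np.pi * freq * t)).astype(np.float32)

    def sample_corpus(rng: np.random.Generator):
        """Yield an unbounded stream of (audio, params) training pairs."""
        while True:
            params = rng.random(NUM_PARAMS).astype(np.float32)
            yield render_patch(params), params

    pairs = sample_corpus(np.random.default_rng(0))
    audio, params = next(pairs)                     # one self-labelled training example
    ```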

  • Sprite

    Thanks for responding, Magnus, though I'm a little confused by your wording. Do you mean it was learning from procedurally generated samples? I wonder how something like that would get you a useful dataset; it's all a bit like sorcery to me, if I'm honest, lol

  • Magnus Lidström

    Well, I can't claim it is trained on only good sounds. 😉

    Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.
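
    One plausible way to read "trained to learn how the Synplant engine works" is as a supervised inverse model: audio features go in, and the network is trained to recover the parameters that rendered them, using pairs like those in the corpus sketch above. The PyTorch sketch below is only a guess at that framing; the mel-spectrogram input, the layer sizes and the `ParamGuesser` name are assumptions, not Genopatch's actual architecture.

    ```python
    import torch
    import torch.nn as nn

    N_MELS, N_FRAMES, NUM_PARAMS = 64, 87, 37       # assumed sizes, not Sonic Charge's

    class ParamGuesser(nn.Module):
        """Maps an audio feature map (e.g. a mel spectrogram) to normalised synth parameters."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(N_MELS * N_FRAMES, 512), nn.ReLU(),
                nn.Linear(512, 512), nn.ReLU(),
                nn.Linear(512, NUM_PARAMS), nn.Sigmoid(),   # parameters normalised to [0, 1]
            )

        def forward(self, mel):                     # mel: (batch, N_MELS, N_FRAMES)
            return self.net(mel)

    model = ParamGuesser()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # One training step on a stand-in batch; in practice each batch would come from
    # rendering random patches with the engine, as in the corpus sketch above.
    mel = torch.rand(32, N_MELS, N_FRAMES)
    true_params = torch.rand(32, NUM_PARAMS)
    loss = loss_fn(model(mel), true_params)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ```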

  • Nikita Urbansky

    - Magnus Lidström wrote:
    Well, I can't claim it is trained on only good sounds. 😉
    Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.

    Hello,

    do you have a scientific article on the details of the algorithms you have been using in Synplant 2? I'm doing research on the topic for my MA thesis, and I would be proud to include the best product on the market, with references to research papers!

  • Manuel Senfft

    - Magnus Lidström wrote:
    Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample.

    I produce music for an upcoming game. The client told me that if anyone on the team used anything involving AI, we might have to declare this and also prove that we rightfully own, or are allowed to use, the data set the AI was trained on.

    Now I wonder (somewhat legally): would you say that such a thing applies when I use Synplant 2 in the music production? I mean, would it already be necessary to declare that the music created with the synth is AI-based, even if it's "only" about the patch created for the synth?

  • Rasmus Merten

    - Sprite wrote:
    Thanks for responding, Magnus, though I'm a little confused by your wording. Do you mean it was learning from procedurally generated samples? I wonder how something like that would get you a useful dataset; it's all a bit like sorcery to me, if I'm honest, lol

    My guess would be that you'd use an autoencoder where the synth's parameters are the reduced feature space in the middle. That way you'd only have to take a sample and compare it to the output of the decoder (the synth rendering), so you wouldn't need to build a fancy dataset. Not sure if it works, though. :)
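
    For concreteness, here is a toy version of the guess above: an encoder predicts parameters, a differentiable stand-in synth renders audio from them, and the loss compares the reconstruction to the input sample, so no labelled dataset is needed. This only works if the decoder (the synth) is differentiable, which the real Synplant engine need not be, and nothing below reflects how Genopatch is actually built; `toy_synth`, the sizes, and the plain waveform loss are illustrative choices.

    ```python
    import math
    import torch
    import torch.nn as nn

    N_SAMPLES, NUM_PARAMS, SR = 4096, 4, 16000      # toy sizes, not Synplant's

    class Encoder(nn.Module):
        """Predicts normalised synth parameters directly from raw audio."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(N_SAMPLES, 256), nn.ReLU(),
                                     nn.Linear(256, NUM_PARAMS), nn.Sigmoid())
        def forward(self, audio):                   # audio: (batch, N_SAMPLES)
            return self.net(audio)

    def toy_synth(params: torch.Tensor) -> torch.Tensor:
        """Differentiable stand-in 'decoder': params -> decaying sine, batched."""
        t = torch.linspace(0.0, N_SAMPLES / SR, N_SAMPLES)
        freq = 100.0 + 900.0 * params[:, 0:1]       # (batch, 1), broadcasts against t
        decay = 1.0 + 30.0 * params[:, 1:2]
        return torch.exp(-decay * t) * torch.sin(2.0 * math.pi * freq * t)

    encoder = Encoder()
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    target = toy_synth(torch.rand(8, NUM_PARAMS))   # pretend these are reference samples
    recon = toy_synth(encoder(target))              # encode to parameters, re-render
    loss = torch.mean((recon - target) ** 2)        # a spectral loss would be more robust
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ```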

  • Fredrik Lidström

    - Manuel Senfft wrote:
    I produce music for an upcoming game. The client told me that if anyone on the team used anything involving AI, we might have to declare this and also prove that we rightfully own, or are allowed to use, the data set the AI was trained on.
    Now I wonder (somewhat legally): would you say that such a thing applies when I use Synplant 2 in the music production? I mean, would it already be necessary to declare that the music created with the synth is AI-based, even if it's "only" about the patch created for the synth?

    No, that does not apply here. Legally, we own full rights to all the data that Genopatch was trained on. No humans (except us) were harmed in training this AI. 😁

    Also, be aware that the trained neural network is only part of the Genopatch process. There's a lot of computation going on. That's why we need 100% of your CPU.

    As for the source data you feed into Genopatch versus the result you get out: as you have probably already figured out, there is no correlation between them whatsoever. Whether you need to credit or pay the creator of the source sound is, I think, more a moral question (if the result sounds almost 100% identical) than a legal one.

  • Manuel Senfft

    Thanks for the detailed answer, Fredrik! Good to know. (=

  • Rasmus Merten

    - Nikita Urbansky wrote:
    - Magnus Lidström wrote:
    Well, I can't claim it is trained on only good sounds. 😉
    Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.
    Hello,
    do you have a scientific article on the details of the algorithms you have been using in Synplant 2? I'm doing research on the topic for my MA thesis, and I would be proud to include the best product on the market, with references to research papers!

    https://arxiv.org/pdf/2205.03043.pdf - have you already seen this?

  • Magnus Lidström

    - Rasmus Merten wrote:
    - Nikita Urbansky wrote:
    - Magnus Lidström wrote:
    Well, I can't claim it is trained on only good sounds. 😉
    Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.
    Hello,
    do you have a scientific article on the details of the algorithms you have been using in Synplant 2? I'm doing research on the topic for my MA thesis, and I would be proud to include the best product on the market, with references to research papers!
    https://arxiv.org/pdf/2205.03043.pdf - have you already seen this?

    I had not seen that actually, but one of the authors contacted me on Instagram today(!). Funny. Our approaches seemed similar when I first skimmed through it, but they differed once I started looking into the details. Besides differences in how we approach the neural net, the core difference is that Genopatch doesn't rely solely on the neural network. Team play occurs between different ML paradigms; they feed off each other's work. This happens in real-time on your CPU, and you can see how it develops when you run Genopatch.
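
    Magnus doesn't say which paradigms are cooperating, so the following is pure speculation. One common way to arrange such "team play" is to let a learned predictor supply a starting guess and let an iterative search (for example a simple evolutionary loop, in the spirit of Synplant's genetic patches) render candidates and score them against the reference. The sketch below only illustrates that general pattern; `render_patch`, `score`, `refine` and every constant in it are invented, not Genopatch internals.

    ```python
    import numpy as np

    NUM_PARAMS = 37                                  # assumed patch-vector size

    def render_patch(params: np.ndarray) -> np.ndarray:
        """Stand-in for the synth engine: two of the parameters shape a decaying sine."""
        t = np.linspace(0.0, 0.25, 4000, endpoint=False)
        freq = 100.0 + 1000.0 * params[0]
        decay = 1.0 + 20.0 * params[1]
        return np.exp(-decay * t) * np.sin(2.0 * np.pi * freq * t)

    def score(candidate: np.ndarray, reference: np.ndarray) -> float:
        """Higher is better; a real system would use a perceptual/spectral distance."""
        return -float(np.mean((candidate - reference) ** 2))

    def refine(initial_guess, reference, generations=50, pop_size=16, sigma=0.05):
        """Evolve around the network's initial guess, keeping the best candidate found."""
        rng = np.random.default_rng(0)
        best = np.clip(initial_guess, 0.0, 1.0)
        best_score = score(render_patch(best), reference)
        for _ in range(generations):
            population = np.clip(best + sigma * rng.standard_normal((pop_size, NUM_PARAMS)), 0.0, 1.0)
            scores = [score(render_patch(p), reference) for p in population]
            i = int(np.argmax(scores))
            if scores[i] > best_score:
                best, best_score = population[i], scores[i]
        return best

    # Usage: the neural net would supply initial_guess; the search polishes it.
    reference = render_patch(np.full(NUM_PARAMS, 0.5))
    polished = refine(np.random.default_rng(1).random(NUM_PARAMS), reference)
    ```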

  • Clam R

    Absolutely fantastic work. I just watched the deep dive video on YouTube and I'm blown away by the Genopatch model. I understand that you wouldn't want to give away your secrets, but I would be fascinated to learn more about the processes involved in training. Could you talk about it a bit or point to some relevant papers?

  • NewLoops.com (Download Synplant 2 Pro Expansion Demo!)

    - Magnus Lidström wrote:
    Well, I can't claim it is trained on only good sounds. 😉
    Genopatch is not an AI that generates sounds from nothing but tags or text input; it generates sounds that should match a reference sample. The neural net is trained to learn how the Synplant engine works so that it can make educated guesses on relevant parameter settings for a given input.

    Amazing!! It's so good too.
