What is it?
The <Neuron> synthesizer is a new device for synthesizing sound, based on Prosoniq's proprietary audio rendering resynthesis. The concept behind this synthesizer is actually quite simple: it has two basic sound generators, called Resynators, found to the left of the main display. A Resynator is, like its counterpart from the analog world, a kind of oscillator that reproduces sound from a Model loaded into the unit. Let's talk about these mysterious Models first, because most of what <Neuron> does can be understood once you have familiarized yourself with this term.
A Model is, as the name suggests, a structure that holds information on the sound you wish to play. Think of computer-generated imagery: if you want to create a three-dimensional scene, you usually start out building a wireframe model of the various objects that should appear in it. You can then resize, scale or distort them until you are satisfied, ultimately applying texture, lighting and reflection parameters to render the final scene in photorealistic quality. Models in the <Neuron> serve a very similar purpose. They are like wireframe models of musical instruments that you can scale, distort, resize or tweak in other ways to shape your sound, finally rendering them by playing them on the keyboard and getting sound as the audible output of the process.
Now wait a minute - isn't that the same as a physical modelling synthesizer? No, not really. Physical modelling relies on someone building an actual computer model of a violin, or a piano, so that this instrument can finally be played on a keyboard. The tricky part is actually building the model. If you intend to create a realistic-sounding violin, you would need no less skill than someone building a real violin out of wood. Clearly, this is beyond the abilities of most people who just want to create interesting sounds. Therefore, with most physical modelling synthesizers you do not have access to the basic models and cannot change their fundamental properties. If you did, the chance of creating a playable (let alone good-sounding) instrument would be slim.
So how does <Neuron> create its Models? Easy enough: from your sampled sounds. A sampled sound is generally the output of an actual physical model. If you record your voice, the physical model is your vocal tract. If you play a trumpet, the sound is a pressure wave created by that instrument. Referring again to the world of computer-generated images: a photograph is to the real scene what a sample is to the real instrument.
The Neural Part
Still, it is not clear how we create our Models from an actual sound. Did the marketing blurb tell you that <Neuron> uses a process that involves artificial neural networks? Well, that's correct. An Artificial Neural Network (ANN for short) is a structure simulated on a computer that works a bit like real living nerve cells. We won't go into detail here (the interested reader is referred to this site for a good explanation of ANNs), but ANNs can be very helpful in recognizing an underlying process from its sampled (i.e. observed) output.
For example, ANNs are used to recognize speech or handwriting, or to beat you at the game of chess. Speech is an audible sequence of sounds (called "phonemes") that represents information from a written piece of text. Handwriting is another sequence of symbols (characters) that may look quite different when written by different people, but that still conveys the same underlying information. Chess moves are the output of a complex set of rules paired with a hidden strategy aimed at defeating your opponent: the actual moves are the observed output of a model (the rules of chess plus your strategy) behind your thinking. All of these are examples of how an ANN can be used to recognize an underlying set of rules (a "model") behind a process.
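To make the idea a little more concrete, here is a minimal sketch of a single artificial neuron in Python. This is purely illustrative and is in no way the network <Neuron> actually uses: it simply weights its inputs, sums them up and squashes the result through an activation function.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs,
    squashed into the range (0, 1) by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A toy two-input neuron acting as a soft AND gate.
# Weights and bias are invented for illustration.
print(neuron([1.0, 1.0], [4.0, 4.0], -6.0))  # close to 1
print(neuron([0.0, 0.0], [4.0, 4.0], -6.0))  # close to 0
```

Training such a network means adjusting the weights until the outputs match the desired ones - which is, loosely speaking, how a network can "learn" the model behind a set of observations.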
During analysis, this is what the <ModelMaker> software does. It takes the one-dimensional sampled sound and tries to estimate its underlying instrument model. It is a bit like recognizing a scene and its objects from a photograph. Using our visual analogy again: <Neuron> does something similar to a person building a computer-generated scene from a photograph of a real one. You may be wondering whether this is possible without ambiguity, and of course it's not. However, this is where the analogy to the visual approach ends: a scene has depth; it consists of three dimensions. A photograph still has height, width and shades of color - a sound does not. Yet although an instrument is a three-dimensional object in space, its sonic properties do not require knowledge of all spatial dimensions.
Even so, there is still ambiguity in this process, which is why there are different Parameter Sets. A Parameter Set is a collection of "presets" tailored to a specific "family" of sounds. It's like telling a visual recognition process that a given photograph contains mostly people, or buildings, or objects. Just as it makes little sense to apply parameters that are specific to people (eye color, hair length, head size) to buildings, it makes little sense to apply parameters that refer to a stringed instrument to a woodwind instrument. At the same time, interesting or unusual effects can result if you do, so there is huge creative potential in using the synthesis in ways it was not designed for.
Each of the two Resynators can hold one Model. A Model can either consist of a single converted sampled sound (a Single Model), or of a stack of converted samples distributed across the keyboard (a Multi Model). A Multi Model is the equivalent of a multi-sample, except that it is built from models derived from the sampled sounds rather than the samples themselves. A Model can also consist of a High Velocity Model and a Low Velocity Model, which can be entirely separate Models. <Neuron> switches between them during playback, depending on the velocity of the note played. Unfortunately, at present processor speeds neither a velocity morph nor a zone morph is possible; this will become an option when faster processors are available for the <Neuron>. Note that a velocity morph between the two Resynators is possible, so you can still create velocity-driven morphs between two entirely different Models!
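The difference between a hard velocity switch within one Resynator and a velocity morph across the two Resynators can be sketched in a few lines of Python. This is a purely illustrative model; the threshold and the linear crossfade curve are our own assumptions, not the hardware's actual behavior.

```python
def pick_model(velocity, low_model, high_model, threshold=64):
    """Hard velocity switch between a Low and a High Velocity Model,
    as <Neuron> does within a single Resynator.
    The threshold value is illustrative."""
    return high_model if velocity >= threshold else low_model

def velocity_morph(velocity, resynator1_out, resynator2_out):
    """Velocity-driven crossfade between the outputs of the two
    Resynators - the morph the hardware *can* do."""
    mix = velocity / 127.0  # 0.0 at velocity 0, 1.0 at velocity 127
    return (1.0 - mix) * resynator1_out + mix * resynator2_out
```

With the switch you hear one Model or the other; with the morph, playing harder gradually shifts the output from Resynator 1 toward Resynator 2.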
In the Resynators, a number of parameters are available for instant access. These parameters are defined by the Parameter Set selected during analysis (or on the machine itself - a feature that will be available in version 2 of the NeuronOS). There are two groups of parameters (called Scape and Sphere) with three levels of parameters each, each level holding either two reciprocal or four independent parameters.
Scape and Sphere are conveniently grouped to provide easier access: Scape parameters usually influence the basic vibrating medium (air column, string) of a Model, while Sphere parameters control the shape or makeup of the corpus, the resonant body of the virtual instrument.
As we have seen earlier, the <Neuron> has two identical Resynators to hold Models. Between these two otherwise identical units sits an additional control, called the Blender. While you can tweak the parameters of the Models directly with the stick controllers in the Resynators, the Blender allows for interaction between the two Resynators. Depending on which Blender Mode you choose, you can use the Sphere (i.e. the shape of the body) of one instrument while using the Scape (e.g. the string) of another. That way you can build all kinds of instruments, or even create a dynamically varying transition or morph between two instruments. All of these parameters can be assigned a controller, so they can be remote-controlled from the keyboard or a sequencer program via MIDI.
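As a rough illustration of what the Blender does, here is a Python sketch in which a "Model" is reduced to just two numeric parameter groups, Scape and Sphere. The structure and numbers are invented for illustration - real Models are of course far richer than a pair of floats.

```python
def blend(model_a, model_b, scape_mix, sphere_mix):
    """Illustrative Blender sketch: each mix value of 0.0 takes that
    parameter group entirely from Model A, 1.0 entirely from Model B,
    and anything in between is a linear morph."""
    lerp = lambda a, b, t: (1.0 - t) * a + t * b
    return {
        "scape": lerp(model_a["scape"], model_b["scape"], scape_mix),
        "sphere": lerp(model_a["sphere"], model_b["sphere"], sphere_mix),
    }

# Invented example values for two instrument Models:
violin = {"scape": 0.2, "sphere": 0.8}
flute = {"scape": 0.9, "sphere": 0.1}

# Take the string-like Scape from the violin, the flute's body:
hybrid = blend(violin, flute, scape_mix=0.0, sphere_mix=1.0)
```

Sweeping the mix values over time, rather than leaving them fixed, gives the dynamically varying morph described above.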
The Slicer is a panning/LFO-type effect - an "Auto-Panner on Steroids", if you will. It can be either horizontal, which means plain left/right panning, or 3D, which adds a kind of chorusing and flyby effect on top. Either way, it is great for creating swirling pads and lush strings that dynamically float about the stereo field.
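A horizontal Slicer can be pictured as an LFO driving an equal-power pan. Here is a minimal Python sketch of that idea; the parameter names and the sine-shaped LFO are our own assumptions, not details of the hardware.

```python
import math

def slicer_pan(t, rate_hz=0.5):
    """Horizontal Slicer sketch: a sine LFO sweeps a mono signal
    between left and right using an equal-power pan law.
    Returns (left_gain, right_gain) at time t (in seconds)."""
    lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * t))  # 0..1
    angle = lfo * math.pi / 2.0  # 0 = hard left, pi/2 = hard right
    return math.cos(angle), math.sin(angle)
```

Multiplying the mono source by these two gains per sample makes it drift back and forth across the stereo field; the 3D mode would add modulation on top of this.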
The Silver section is the effects and filter section of the <Neuron>. Some reviews have suggested that turning it off makes the sound far less impressive. Of course it does: due to processor constraints, the basic output routing of the <Neuron> reproduces all sounds in mono. So at the very least a stereo delay should be applied to add some width to the sounds, which is neither unusual nor detrimental to the sonic quality. Although there is a Blender Mode called STEREO, which pans one Resynator to the left and the other to the right, you would usually want the Blender for something other than plain panning - for example, intense morphing or velocity transitions.
Available Silver effects are grouped into Time and Frequency effects and include a wide selection of modulation effects (chorus, l/r delay, phaser...) as well as compression, EQ and even unusual effects such as sp_warp, which is actually the same alien-world effect as the Prosoniq PiWarp. All delay effects change their delay time dynamically, meaning that glitches due to changing delay times are a thing of the past. Only very few synthesizers do this, the Ensoniq VFX being one of them. This even works with the tap tempo button (master FX only), which can be used to sync the delay times by simply tapping the tempo manually - very cool for live gigs!
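The tap-tempo idea is easy to sketch: average the intervals between button presses to get a target delay time, then glide the actual delay time toward that target instead of jumping to it - the gliding is what avoids the glitches. A Python illustration; the smoothing coefficient is an arbitrary choice of ours, not <Neuron>'s.

```python
def tap_tempo_delay_ms(tap_times):
    """Derive a delay time (in ms) from tap-tempo button presses by
    averaging the intervals between consecutive taps (in seconds)."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return 1000.0 * sum(intervals) / len(intervals)

def smooth_delay(current_ms, target_ms, coeff=0.01):
    """Move the delay time a small step toward its target each audio
    block, so a tempo change never produces an audible click."""
    return current_ms + coeff * (target_ms - current_ms)

# Four taps, half a second apart -> 500 ms delay (120 BPM quarters):
print(tap_tempo_delay_ms([0.0, 0.5, 1.0, 1.5]))  # 500.0
```

Called once per audio block, `smooth_delay` makes the delay time drift smoothly from its old value to the newly tapped one.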
Filters come in various characteristics, including the famous Prosoniq 24 dB/oct LPF - the filter from the world's most widely used VST plug-in, NorthPole.
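For readers curious what a 24 dB/oct low-pass actually is, a classic way to get that slope is to cascade four 6 dB/oct one-pole stages. The sketch below shows that generic textbook topology in Python - it is not Prosoniq's actual NorthPole algorithm.

```python
import math

class LowPass24:
    """A 24 dB/octave low-pass built from four cascaded one-pole
    (6 dB/oct) stages - the classic 4-pole topology."""

    def __init__(self, cutoff_hz, sample_rate=44100.0):
        # One-pole smoothing coefficient derived from the cutoff.
        self.a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
        self.z = [0.0, 0.0, 0.0, 0.0]  # one state value per pole

    def process(self, x):
        """Filter one input sample through all four stages."""
        for i in range(4):
            self.z[i] += self.a * (x - self.z[i])
            x = self.z[i]
        return x
```

Each stage rolls off 6 dB per octave above the cutoff, so the four in series give the 24 dB/oct slope; a resonant filter like NorthPole would additionally feed some of the output back to the input.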
To be continued...