Raspberry Pi Pico Synth_Dexed? – Part 2

Now that the initial elation at getting a reasonable sounding, er, sound from my Raspberry Pi Pico Synth_Dexed has worn off, I’ve been seeing what I can do about the performance.

TL;DR: this is all analysis and measurements, working out and attempting to understand how it all currently works. I haven’t fixed anything or improved it yet. I’m working on it. Read on if you want all the gory details.

Recall at the end of part 1 I found I could only really support the following:

  • 2 note polyphony at a sample rate of 44100.
  • 4 note polyphony at a sample rate of 24000.
  • 6 note polyphony with jittering and stutters only…

My main theory as I start on the next phase of investigation is that this is down to either (or both):

  • The fact that the Raspberry Pi Pico has no hardware floating point accelerator.
  • There is a bottleneck in the Raspberry Pi audio handling somewhere.

So these are the two things to investigate further at the moment.

Warning! I strongly recommend using old or second hand equipment for your experiments.  I am not responsible for any damage to expensive instruments!

If you are new to microcontrollers and single board computers, see the Getting Started pages.

Timing and Existing Performance

I’ve created some simple “timing by GPIO” routines that allow me to hook up an oscilloscope to get some idea of where the code is spending its time.

I have the following hooks as a starting point.

Main.cpp:

main:
  timingToggle(2) on every scan of the main while(1) loop
  timingOn(3)
  Update audio buffer
  timingOff(3)

Update buffer callback routine:
  IF new samples from Synth_Dexed are required:
    timingToggle(4)
    Synth_Dexed -> getSamples

I’ve also updated the code to see how changing the buffer size for both the Pico audio routines and Dexed itself interacts with the polyphony and sample rate settings:

#define DEXED_SAMPLE_RATE 24000
#define POLYPHONY 4
#define DEXED_NUM_SAMPLES 256
#define PICO_NUM_SAMPLES 256

To show how these interact, here are traces for 6-note polyphony at a 24000 sample rate, with both buffers 256 samples in size, showing the two “toggling” timing GPIOs.

Every transition of the yellow trace (GPIO 2) corresponds to once round the main code loop. Every transition of the blue trace (GPIO 4) corresponds to when Synth_Dexed was called to fill the buffer with samples. The trace on the left is the “silent” trace and the trace on the right is when it is playing the 6-note chord.

Note that the loop appears to be running at approx 90Hz (there are two transitions for every period measured on the scope). As each period is outputting 256 samples, this gives us our sample rate of approximately 90×256 ~= 24000.

For reference, we can use the timingOn/Off measurement of GPIO 3 (blue trace below) to see that pretty much all the time spent in the loop is spent in the “update_audio_buffer” code.

I can’t capture the entire yellow cycle and show the blue trace, but there is just a small “low” block corresponding to the time between calls to the update function. Pretty much all the “high” time is spent in the update function.

Here are some traces using a 64-sample buffer for Synth_Dexed and a 256-sample buffer for pico_audio (so the same as before). Again, silent on the left, playing a 4-note chord on the right.

We can clearly see that once the callback is filling the buffer from Dexed (the blue trace), there are four calls in quick succession for each emptying of the 256-sample Pico buffer. The implication is that there is some waiting time while the Pico buffer plays before the next call to fill the buffer from Synth_Dexed.

By way of contrast, here are the same settings playing the 6-note chord. We can clearly see that the time for Synth_Dexed to calculate and return 64 samples’ worth of data is only just about keeping up with the Pico playing 256 samples.

For completeness, here is a trace of the 256-256 buffer configuration again, but this time playing an 8-note chord (on the right). We can clearly see that the time spent calculating the notes’ samples pushes out beyond the time taken to play them, compared to when no note is playing (left).

The Pico has to be stalling whilst Synth_Dexed is calculating the samples when all 8 notes are playing.

It is interesting to see what happens when the sample rate is increased. These are the same silent (left) / 4-note chord (right) traces for a 44100 sample rate. The frequency of calls to fill the buffers has roughly doubled, as one might expect, but now playing 4 notes pushes Synth_Dexed past the time it takes for the Pico to play its 256-sample buffer.

What can we take from all this? I draw the following conclusions:

  • Almost all the overhead seems to be in calculating samples, not in playing them.
  • Any playing overhead that exists is (as is to be expected) pretty much constant regardless of the polyphony of Dexed.
  • The current performance of Dexed is pretty much maxed out at 5-note polyphony for a sample rate of 24000. A few optimisations might just about get it up to 6 notes – it plays pretty clearly with just the occasional glitch, which might be solvable. But something pretty radical is likely to be required to go any higher…

In short, any improvement is probably going to have to come from optimising the Dexed code, and the biggest suspected culprit at the moment is the floating point maths.

I’ll come to the floating point subsystem in a moment, but looking at the pico_audio library and how I’m using it, I’ve noticed there seems to be a lot of copying of sample data between buffers going on at present:

  • Within Synth_Dexed, the samples are generated as floating point values and then converted to a signed 16-bit integer using arm_float_to_q15(), one buffer at a time.
  • Within my own code, samples are provided via the callback getNextSample() which returns samples, one at a time, to be placed in the pico_audio “producer” buffer grabbed in update_buffer() using take_audio_buffer().
  • Within take_audio_buffer, eventually the code goes through a sample conversion, but in my case this is a mono, signed 16-bit stream getting converted to a mono, signed 16-bit stream – but a copy from a “producer” buffer to a “consumer” buffer still takes place to achieve it.
  • Finally, DMA is triggered to get the data from the “consumer” buffer out to the I2S PIO driver.

This really feels like overkill! I should be able to trim down the copying at my end. I don’t know yet if there is a better way to get from the floats used by Synth_Dexed to signed 16-bit values, but it may be that it can be done a bit more “on the fly”. But I would really like to eliminate that producer to consumer copy if I can. Alternatively, maybe I could add a floating point buffer type and leave the conversion to that last minute copy.

There is a detailed analysis of the layers and buffer handling in the Pico audio library later in this post.

Pico DEBUG_PINS

I was interested in finding out how long the Pico takes in the various stages of the buffer transfers in the audio library. It turns out that there is provision for enabling “debug” pins at various points in the Pico’s libraries. This seems to be enabled with the following macros (these ones are from audio_i2s.c):

CU_REGISTER_DEBUG_PINS(audio_timing)
//CU_SELECT_DEBUG_PINS(audio_timing)

DEBUG_PINS_SET(audio_timing, 4);
DEBUG_PINS_CLR(audio_timing, 4);
DEBUG_PINS_XOR(audio_timing, 1);

These are defined in gpio.h in the Pico SDK, but there isn’t really any documentation about them. From reading the code, I can see that if you put a call in your main code to:

gpio_debug_pins_init();

and then uncomment one of the CU_SELECT_DEBUG_PINS() macros, the _SET, _CLR and _XOR macros become active and will set, clear or toggle a GPIO pin. By default, in gpio.h, the following defines are set up to start the DEBUG_PINS at GPIO 19:

#define PICO_DEBUG_PIN_BASE 19u
#define PICO_DEBUG_PIN_COUNT 3u

The _SET, _CLR, _XOR macros work on a bit-mask basis, starting at the _PIN_BASE. So if 3 _DEBUG_PINS are defined, the following calls will set the corresponding pins:

DEBUG_PINS_SET(audio_timing, 1) ---> GPIO19
DEBUG_PINS_SET(audio_timing, 2) ---> GPIO20
DEBUG_PINS_SET(audio_timing, 4) ---> GPIO21

If there were 4 DEBUG_PINS enabled then setting (audio_timing, 8) would enable GPIO22.

Note: other subsystems have their own definitions instead of “audio_timing”.

Why mention this? Because the DMA IRQ handler uses _SET and _CLR on the third DEBUG_PIN (4, i.e. GPIO21) either side of the audio_start_dma_transfer() function, so this can be used to see how much time is taken up in that “converting” copy.

In the following trace, we can just about see (the small blue peak) that the time in the DMA handler is pretty insignificant compared to the time processing samples.

So at this point, I’ve decided I don’t need to worry about the extra copying that appears to be going on in the audio library itself.

Deep dive into Synth_Dexed getSamples

In my own code, I’ve switched the audio buffer filling from a callback function (which filled a Dexed buffer and then passed it on, one sample at a time, to the Pico’s audio buffer) to my own custom update routine that just fills an entire buffer directly:

void fillSampleBuffer(struct audio_buffer_pool *ap) {
    struct audio_buffer *buffer = take_audio_buffer(ap, true);
    int16_t *samples = (int16_t *) buffer->buffer->bytes;
    dexed.getSamples(samples, buffer->max_sample_count);
    buffer->sample_count = buffer->max_sample_count;
    give_audio_buffer(ap, buffer);
}

This eliminates the need to copy (one sample at a time, via the callback) from the Dexed buffer to the Pico audio buffer.

Now it is time to dig into the Dexed getSamples routine and attempt to really see what is going on. This can be found in dexed.cpp.

First of all, it is interesting to see exactly how much time is taken in the getSamples routine itself, so I’m using timingOn(4) and timingOff(4) at the start and end of the “real” getSamples and timingOn/Off(3) at the start and end of the integer version (that calls the real version and converts the samples).

This shows how time in getSamples compares (blue) to the default scan time (yellow) for silence (left) vs playing a 5-note chord (right) – i.e. something that plays successfully with no distortion.

Comparing the time in “real” getSamples (that calculates floats – in blue) with “integer” getSamples (that converts the buffer prior to returning – in yellow), we can see there is only a very marginal increase in overhead (left):

For comparison, on the right is the trace for playing a 6-note chord, which is where the stuttering starts to appear in the audio output. We can see how the getSamples (blue) is maxed out against the basic Pico Audio buffer filling (yellow).

Two more traces: on the left we have timing traces for the main “calculate a block of samples” routine. We can see four blocks are required to fill our 256-sample buffer. This comes from a block size definition _N_ = (1<<6), i.e. 64 (from here).

On the right we have the time taken inside the dx7note->compute function itself. This is called for each possible note, up to the maximum polyphony specified when we initialised Synth_Dexed.

With the buffer size of 256 samples, we have four times round the “get a block of samples” loop (left), and with 5-note polyphony we can see 5 calls to dx7note->compute (right) for each call to getSamples – so 5×4 = 20 calls in total.

Observations so far:

  • getSamples returns a sample buffer of floats, yet dx7note->compute returns 32-bit, signed integers. The integer getSamples routine I’m using then converts these floats back to 16-bit signed integers.
  • It would appear that the reason for the above is the call to fx.process at the end of getSamples, which happens on the entire buffer of (now float) samples. The time taken for this call, after obtaining the filled sample blocks, can be seen as the difference between the yellow and blue traces in the last set of oscilloscope screens.
  • The conversion of each note’s worth of samples from signed 32-bit integers to floats appears to happen due to the following line, which according to the traces, seems to take at least the same amount of time as calculating the samples in the first place on a per-note basis:
buffer[i + j] += signed_saturate_rshift(audiobuf.get()[j] >> 4, 24, 9) / 32768.0;
  • This line effectively turns the 32-bit signed value (so -2147483648 to 2147483647) into a -1.0 to +1.0 floating point number, using a 32-bit floating point representation (i.e. a “single” float).
  • Then it adds the final result to the value already in the buffer (which starts off at zero).
  • It would appear that it does this as fx.process (from PluginFx.cpp) applies the filters but only works exclusively with floats.

One thing has been confirmed though. Looking at the assembly listing produced as part of the build process, I can see several calls to the Pico’s “aeabi” wrapper functions, which I believe are the “faster” (compared to the compiler’s own) ROM implementations of (single or double) floating point routines:

So yes, there is a fair bit of floating point conversion going on, but yes, the code is already using the Pico’s faster library for floating point operations.

As an experiment I commented out the call to fx.process() and found I was able to squeeze in another note of polyphony, taking me to 6-note polyphony with hardly any artefacts! But I’m still at a sample rate of 24000 and now have no filter!

Int to Float to Int again

So, digging deeper into these conversions. Within the float32 version of getSamples, the following is going on:

  • dx7note->compute returns a sample for any “live” note as a signed, 32-bit value.
  • these values are translated into a 32-bit floating point value in the range -1.0 to +1.0 using the above mentioned code.
  • these are processed via fx.process() and returned to the calling function.

I’m not entirely sure I can untangle the shifting and dividing going on here, but I think the following is happening:

buffer[i + j] += signed_saturate_rshift(audiobuf.get()[j] >> 4, 24, 9) / 32768.0;
  • The value to be shifted is first right-shifted by 4 in the normal way, presumably yielding a 28-bit signed value… it isn’t clear if this will be an “arithmetic bit shift” or a “logical bit shift”. In the former the sign should be “shifted in”. In the latter, it won’t… I’m guessing it has to be arithmetic, otherwise I don’t see how it could ever work for negative samples…
  • Then it performs a “signed saturated right shift” of 9 places, presumably with the “saturation” set to 24 bits (-0x800000 to 0x7FFFFF, or approx ±8.4 million). I’m not entirely sure why this is required, as wouldn’t shifting by 4 then 9 result in a 19-bit number anyway…?
  • Finally it divides the result by 32768.0 which is essentially another shift right by 15…

We know this leaves a value in the range -1.0 to +1.0, but it isn’t entirely clear to me how these various combined shifts of what appears to be 28 (4+9+15) places gets us there.

Interestingly, this all relates back to the original MSFA code (from here):

int32_t val = audiobuf2.get()[j] >> 4;
int clip_val = val < -(1 << 24) ? 0x8000 :
               val >= (1 << 24) ? 0x7fff : val >> 9;

Continuing on, we can see that in the int16 version of getSamples, the buffer is converted back again from a float to a signed, 16-bit value using the following code:

arm_float_to_q15(tmp, (q15_t*)buffer, n_samples);

This is one of the ARM DSP library functions and converts a 32-bit floating point value into a Q15 fixed point value. There seems to be some ambiguity about quite what the 15 stands for. For an ARM system the 16-bit value comprises a sign bit plus 15 fractional bits, so this represents a number between -1 and 1 with 15 places after the “decimal point” (although in this case we’re talking binary, not decimal, of course).

Reading a Q15 value directly as a signed 16-bit value would thus give you a value between -32768 (0x8000) and 32767 (0x7FFF), so the float to q15 function is effectively equivalent to: q15_value = float_value * 32768, assuming a floating point value between -1.0 and +1.0.

We can see this is the exact inverse of what the “/ 32768.0” is doing in the conversion code in the “real” version of getSamples. We can therefore trust that the bit-shifting (by 4, then a saturated shift of 9) has the end result of leaving us with a Q15-equivalent value, which is then converted to the -1.0 to +1.0 range via the “/ 32768.0”.

This presents the possibility that we can leave out the integer to floating point to integer translation completely if we could just convert the filter “fx” code to also work on Q15 fixed point numbers.

In the meantime, mirroring the “comment out the fx.process” step which gave us 6-note polyphony, this is what happens when the floating point step is removed from getSamples completely. On the left is the floating point version with no fx-process step; on the right is the Q15 version also with no fx.process step (yellow = complete getSamples step; blue = dx7note->compute step):

We can really see how much time is taken up in the conversions here. It opens up the possibility of more than 8-note polyphony if the filter could be rewritten to use fixed-point maths.

Interestingly, the original “music synthesizer for Android” (MSFA) says it was optimised for 32-bit fixed point maths. It also includes a fixed point filter calculation, but the comments imply it is a simplification or “initial version”, so it isn’t clear at what point the floating point implementation used in Synth_Dexed came along.

For what it’s worth, it would appear (assuming I’m reading this right) that the original DX7 had a 14-bit sample format and a 12-bit envelope, so I’m wondering if that is how we can bit-shift by 4 then 9 places and end up with a -1.0 to 1.0 range… that would seem to make sense…

Also, it would appear that a configurable filter stage appeared in Dexed itself, but isn’t part of the original MSFA code, and there doesn’t seem to be any mention of a filter in the original DX7 that I can find. So actually, I could just drop the filter and other effects and then I’d probably end up with a fully integer synth. In fact, the Dexed FAQ does actually say this:

  • “msfa / Dexed is an integer based synth engine, it uses the Q** format.”

So I’m starting to think just leaving out the filter stage could be a legitimate option. I will have to implement volume somehow though – and then decide if that should be channel volume or “master volume” (in MIDI terms).

If not, then all this seems to suggest it would be very worthwhile attempting to replace the floating point filter routines with a fixed point equivalent – but that really won’t be a trivial undertaking.

Another option might be to go for a simpler filter application – the original MSFA includes an integer-based resonant filter implementation which might suffice (resofilter.cc).

But all that will have to wait for another time.

Below are two more detailed dives into how the Pico supports floating point and how the Pico Audio library works.

Kevin

Floating Point Library

Synth_Dexed makes use of the ARM CMSIS DSP code for a range of floating point calculations. These had to be pulled in to allow it to build. At the end of part 1 I found out how to replace the CMSIS library en masse with just the few, relatively isolated, functions that Synth_Dexed was using so if it comes down to optimising this code somehow, at least I know the size of the task!

The Pico Audio Library

As part of chewing over how everything is working and where the overheads are likely to be, I’ve been trying to understand how the Pico audio library works, just to get my head around where it might be taking time and where there might be alternative ways to use it to improve things.

It’s a bit complicated!

The top-level principle is that the “user code” acts as an audio producer and the I2S driver code acts as an audio consumer. I2S is implemented using the PIO subsystem and is fed using the hardware DMA peripheral from a pool of buffers managed by the audio library.

I’m using the Pimoroni audio.hpp code which essentially has the following structure:

init_audio:
  define the audio format to use, sample rate, etc
  CALL audio_new_producer_pool() to set up a pool of producer buffers
  CALL audio_i2s_setup() to configure I2S
  CALL audio_i2s_connect() to initialise I2S
  CALL audio_i2s_set_enable() to turn it all on

update_buffer:
  CALL take_audio_buffer() to get a free producer buffer from the pool
  fill the buffer with samples using a callback mechanism
  CALL give_audio_buffer() to queue the buffer for processing

So there is no i2s read/write functionality directly visible – that is buried within the PIO I2S layers, so the basic idea is to just keep the buffer filled enough to allow the DMA and PIO to do its thing.

So digging into these calls a bit more to see exactly what is going on…

As already mentioned the library works on the idea of producers and consumers and allows you to define the connections between them. The connection is a structure that links the take/give routines for producers and consumers together.

The default connection is defined in pico_audio.c with the following structure and the following listed four functions:

~~ pico_audio.c ~~

static audio_connection_t connection_default = {
    .producer_pool_take = producer_pool_take_buffer_default,
    .producer_pool_give = producer_pool_give_buffer_default,
    .consumer_pool_take = consumer_pool_take_buffer_default,
    .consumer_pool_give = consumer_pool_give_buffer_default,
};

producer_pool_give_buffer_default(connection, buffer) {
    queue_full_audio_buffer(connection->producer_pool, buffer)
}

producer_pool_take_buffer_default(connection, block) {
    return get_free_audio_buffer(connection->producer_pool, block)
}

consumer_pool_give_buffer_default(connection, buffer) {
    queue_free_audio_buffer(connection->consumer_pool, buffer)
}

consumer_pool_take_buffer_default(connection, block) {
    return get_full_audio_buffer(connection->consumer_pool, block)
}

The I2S sending (consumer) code uses the default connection for give_audio_buffer() but replaces the take_audio_buffer() connection code with wrap_consumer_take().

The rest of the audio_i2s code has the following functionality:

~~ audio_i2s.c ~~

audio_i2s_setup:
  Initialises PIO, DMA, DMA data requests (DREQ_PIOx_TX0)
  Set up audio_i2s_dma_irq_handler() as the DMA interrupt handler

audio_i2s_connect (prodpool):
  CALL audio_i2s_connect_thru(prodpool, no connection):
    CALL audio_i2s_connect_extra(prodpool, no connection):
      CALL audio_new_consumer_pool() for a new consumer buffer pool
      Set up a consumer connection called m2s_audio_i2s_ct_connection
      CALL audio_complete_connection to link the consumer to producer

audio_i2s_dma_irq_handler:
  IF finished playing the last buffer:
    CALL give_audio_buffer to return the consumer buffer
      CALL consumer_pool_give function
        -> consumer_pool_give_buffer_default() to queue the free buffer
  CALL audio_start_dma_transfer()
    CALL take_audio_buffer() for a new consumer buffer
      CALL consumer_pool_take() function
        -> wrap_consumer_take()
          CALL mono_to_mono_consumer_take() - in my case
            CALL Mono-FmtS16 to Mono-FmtS16 consumer_pool_take()
              CALL get_free_audio_buffer() from consumer pool
              CALL get_full_audio_buffer() from producer pool
              Perform any sample conversions whilst copying from p to c
              CALL queue_free_audio_buffer() to return to producer pool
              return filled consumer buffer to call stack
    IF no buffer ready to play, just output silence
    Configure DMA for the new consumer buffer
    CALL dma_channel_transfer_from_buffer_now with the consumer buffer

The range of C++ templated consumer_pool_take() functions is defined in sample_conversion.h to allow for conversions between stereo or mono, and different formats: unsigned or signed, 8-bit or 16-bit. Each will involve a copy from a producer buffer to a consumer buffer performing any necessary processing on the way.

So to summarise the buffer actions, it is essentially the sequence of get_free/queue_free routines with get_full/queue_full routines acting on either the producer or consumer pools.

The specifics for the I2S sending are as follows:

User calling:
- take_audio_buffer (producer pool)
---> uses get_free_audio_buffer() from producer pool
- fill the buffer with data
- give_audio_buffer (producer pool)
---> uses queue_full_audio_buffer() to producer pool

When DMA data request triggers:
- give_audio_buffer (consumer pool)
---> queue_free_audio_buffer() to consumer pool
- take_audio_buffer (consumer pool)
---> uses get_free_audio_buffer() from consumer pool
---> uses get_full_audio_buffer() from producer pool
---> transfer data from producer to consumer buffer
---> uses queue_free_audio_buffer() to producer pool

Some of the functions have a parameter that suggests there is an option for a blocking or non-blocking driver. The key issue appears to be waiting for a free or full buffer to be made available via the appropriate queue function.

To do this, the library is using the __wfe() and __sev() ARM event handling system. If blocking, then the get function will use wfe to wait for an event from the queuing function.

At the lowest level the pools are managed using spin_lock_blocking() and two spin locks called the free_list_spin_lock and prepared_list_spin_lock, which are both created for each new buffer pool.

This seems to be working fine as far as I can see, but there does seem to be a lot of processing of various buffers involved! The library seems very flexible, supporting different audio output types (PWM, I2S, SPDIF) and a range of audio formats.

In particular that extra buffer copy as part of the pre-DMA setup seems pretty superfluous in my case as no conversion should be required – the eventual “conversion” routine is for mono, signed 16-bit to mono, signed 16-bit, so that might be an option for some optimisation. It won’t affect the sample rate playback, which is fixed by the DMA/PIO, but it might allow for some additional CPU cycles that could allow Dexed more processing time to calculate new samples.

It also all happens in an interrupt routine, which is slightly surprising, as typically we’d want these to be as short as possible. There may be other ways of passing these buffers around that don’t require a copy prior to DMA. There are also quite a lot of layers involved in each action. I wonder if a simpler buffer implementation would give more processing time back, but until I have some measure of the actual time taken in any of these calls, it is all speculative.

There are a few other Pico I2S implementations I’ve found that also use DMA/PIO, so I might have a look at those too to see if there are any optimisations to be made for my fixed case (fixed format, I2S only, mono, output only).

But going on the measurements I have, any performance limitations are still in getting the Pico to calculate samples and fill the buffers so it’s probably not worth worrying too much about the audio library at this point.

Kevin

6 thoughts on “Raspberry Pi Pico Synth_Dexed? – Part 2”

  1. You really dive into it! Bravo!

    My Raspi 1 MiniDexed is now working fine with an OLED display.
    I’m going to have a closer look at the wiki.
    My project is to put several different synth modules in a rack.
    I’m also planning to build a sequencer, eventually using the Pico whom I like very much.


  2. By the way, I have another question that maybe you can answer.
    In a digital synthesizer it would be logical to connect the different modules via I2S.
    I’m relatively new to this, and I’m wondering if it would be possible to use synchronized BCLK and LRCLK for all modules so that only 1 wire would be needed to transport the audio data. Do you know if this is possible?


    1. I don’t, I’m afraid. I guess as I2S is simply a somewhat unsynchronised audio stream it might work, but I don’t know enough about I2S to really say. This page has quite a bit of information about it, so that might give a clue: https://github.com/malacalypse/rp2040_i2s_example

      Of course with a modular synthesizer it isn’t just audio needed to be passed around, there are gate signals and timing and so on. I don’t know how that would work with I2S…

      There have been some synth modules that linked up using I2C – I think they might have used MIDI over I2C. I did my own MIDI over I2C too (it’s all on this blog somewhere!).

      I guess it depends at what point in the audio chain you are linking things up – if it is just for final mixing, then maybe you could link via I2S, but if you are talking note signals, then probably not. Also, I don’t know how polyphony would work if at all.

      Kevin


      1. Thank you Kevin.
        Anyway, it was just an idea. I would use it only for audio signals, not gate, not LFO etc.
        But you are right, then it is not so universal. Nothing is as universal as using voltage control, but then you have the noise problem. It was just an idea to avoid that.

        I have put a summary of my MiniDexed experience to my homepage:

        Click to access MiniDexed_01.pdf

