lid55
MidiFire Beta
Posts: 75
Post by lid55 on Mar 9, 2018 16:20:49 GMT
Hi Nic, I was wondering if you could shed light on a MidiFire issue. It's about the micro-timing of messages in MidiFire. What I have is a CC from a MIDI foot controller entering MidiFire, which in turn sends three main messages:
- one Clock message to start the internal DynamicClock module
- one CC sent to another app to start its transport
- one CC sent out the MIDI out of the audio interface to trigger a laptop sequencer
To get these all working in sync, I'm using delayed (+D) messages at various values. Additionally, since I'm using +D for the internal Clock message being sent, I have to use at least one +I.
Through trial and error, I have found that placing the MIDI Clock message first in the ruleset produces more accurate results, though I have no idea why that is the case.
Additionally, right now I have it set up so that I'm using +I for all of them, even though only the internal Clock message needs it, because I have a hunch that they're then all put on the same "thread" or stream and their timing is therefore more accurate (though I admit this hunch about "threads" is not yet based on actual testing).
Here’s an example of my ruleset:
IF M0 == B1 01 7F
  SND FA +D27 +I # start internal MIDI Clock
  SND B4 M1 M2 +D6 +I # start laptop sequencer
  SND B2 28 7F +D223 +I # start app sequencer (on this iPad/iPhone)
  SND CF 00 # select different preset on MIDI foot controller
  ASS G30 = 00 # set MidiFire variable
END
I include the last two rules because they're part of my actual setup and in case they unintentionally affect the timing, but they aren't relevant to the main sync timing I'm concerned about, which is getting the first three messages as consistent as possible relative to each other. I'm measuring the timing by recording audio blips and comparing them to each other in an audio waveform on the laptop. So I'm working in units of ms, but the results differ depending on which device I'm using (for example, there's less timing variation on my faster, newer iPhone 8 compared to the timing on my iPad Air 2).
Should I try to find a way to avoid +I messages, or use MidiFire 1-style +C messages instead? Any advice or insights are greatly appreciated.
Regards, - Brett
nic
Soapbox Supremo
Troublemaker
Press any key to continue
Posts: 2,011
Post by nic on Mar 9, 2018 16:46:55 GMT
Hi lid55,
The +I messages are scheduled to an internal queue and delivered close to time, but actually I would expect normal messages with timestamps to be delivered more accurately to the external devices by CoreMIDI.
Here is something to consider. There is a delay of around 200ms before the clock actually starts after seeing the start FA, which is why you can add these delays to shift things around. However, what you might want to do is trigger the other two events once the clock sends its own FA (not the one you are triggering with). Although, why do the two sequencers need those CCs to start? Should they not start when they see the clock, and thus be in perfect sync?
Anyway, I was thinking something like this. In a Stream Byter connected to the clock input:

  B1 01 7F = FA # start clock

Then in another Stream Byter that the clock output would be connected into:

  IF M0 == FA # start other sequencers
    SND B4 M1 M2 # start laptop sequencer
    SND B2 28 7F # start app sequencer
  END

I guess each sequencer may take a small amount of time to sync to the clock when it starts, but once they have synced they should remain coupled to the single clock source.
I haven't tried any of the above. I'm just about to go off grid for a while. I will be checking in sporadically, but probably can't verify any of my ideas until Tuesday week. Sure, if you have it all working, maybe don't try and fix it? :-)
Regards, Nic.
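PS: on the timestamps point, here's a rough, untested sketch (Swift, outside MidiFire entirely; the CoreMIDI port/destination setup is omitted, and the 27 ms delay and CC bytes are just the example values from your ruleset) of how an app can stamp a packet for future delivery and let CoreMIDI do the scheduling:

  import CoreMIDI
  import Darwin

  // Untested sketch: stamp a 3-byte CC for delivery ~27 ms from now and let
  // CoreMIDI's MIDIServer schedule it, instead of waiting in-app and sending "now".
  func sendDelayedCC(port: MIDIPortRef, dest: MIDIEndpointRef) {
      var timebase = mach_timebase_info_data_t()
      mach_timebase_info(&timebase)
      let nanosPerHostTick = Double(timebase.numer) / Double(timebase.denom)
      let delayTicks = UInt64(27_000_000.0 / nanosPerHostTick) // 27 ms in host ticks
      let when: MIDITimeStamp = mach_absolute_time() + delayTicks

      let bytes: [UInt8] = [0xB2, 0x28, 0x7F] // the app-sequencer start CC
      var packetList = MIDIPacketList()
      let packet = MIDIPacketListInit(&packetList)
      _ = MIDIPacketListAdd(&packetList, MemoryLayout<MIDIPacketList>.size,
                            packet, when, bytes.count, bytes)
      _ = MIDISend(port, dest, &packetList)
  }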
lid55
MidiFire Beta
Posts: 75
Post by lid55 on Mar 9, 2018 18:21:32 GMT
Hi Nic, thanks very much for the quick response, much appreciated. Now I can work on it today before my weekend starts (Canada time here).
And thanks for the info, I had totally forgotten that DynamicClock sends its own FA/startClock message when it receives one. I will try this out and post back.
Have a good one! -Brett
lid55
MidiFire Beta
Posts: 75
Post by lid55 on May 18, 2018 22:51:18 GMT
Hi Nic, I finally had some good time to get into this again, and I have a couple of questions I was hoping you could answer. I'm using several StreamByter modules... some are in parallel, some in series. I'm trying to get at the micro-timing sync details (as described above), so I'm wondering how StreamByter rules get processed through the separate StreamByter modules: is there one big list in the background? Does the placement of the modules on the canvas matter? Does the order of module creation matter? I guess I'm trying to arrange for certain messages to leave MidiFire at EXACTLY the same time (or I guess the closest thing to that: one right after the other).
Also... there are certain cases where I have a module connected to two outputs, and the default is for MidiFire to duplicate the messages to both outputs. Is it advisable, from a speed-of-processing standpoint, to just shoot any irrelevant messages out the two outputs and let the other apps filter them, or is it faster and/or better to +B the irrelevant messages before they hit those outputs? I'd have to use two new StreamByter modules to create each +B. Also, is a StreamByter +B better or faster than adding something like a ChannelStrip module (which could also work in this case)?
I don't know if you've come across it, but the new iOS processing has seemed to me a bit less accurate lately (in terms of audio priority). Short of learning how to totally program my own apps, I'm just hoping to control what timing details I can through MidiFire for now.
Regards, - Brett
nic
Soapbox Supremo
Troublemaker
Press any key to continue
Posts: 2,011
Post by nic on May 19, 2018 15:59:36 GMT
Hi lid55,
That's a good question. Everything (usually) starts with a MIDI event coming in on an input, and the order in which the connections to that input were originally made is the sequence of modules that the event goes to. It then follows the connections (again, in the order they were made) of the next module and works its way to a dead end (e.g. an output). So it's the order in which you make the connections that dictates the order in which modules are called with the current list of events.
Each module has its own data (so yes, each SB is executed individually, without reference to other modules' rules) and is asked to examine an incoming set of events and return a (possibly) modified list of events when done.
Blocking an event in MidiFire is very efficient (whether through the Stream Byter, Channel Strip or Protocol Filter). I'm not sure I could even give you any indication of which might be faster; the difference will be negligible. I would tend to only send the actual required events out to apps rather than expecting the apps to ignore stuff, but that's just me. I don't really think it matters.
For the record, the Stream Byter text is not interpreted during processing; the 'Install Rules' stage converts the text to (more or less) native code, so it's also very fast.
Regards, Nic.
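PS: purely to illustrate the dispatch order (a hypothetical model in Swift, not the actual engine code), it behaves something like this:

  // Hypothetical model of the routing described above: each module examines
  // the current event list, then passes the result along its output
  // connections in the order those connections were made (depth-first).
  final class Module {
      var connections: [Module] = []         // kept in the order they were made
      var process: ([[UInt8]]) -> [[UInt8]]  // may block, copy or rewrite events

      init(process: @escaping ([[UInt8]]) -> [[UInt8]] = { $0 }) {
          self.process = process
      }
  }

  func dispatch(_ events: [[UInt8]], into module: Module) {
      let out = module.process(events)   // module returns a (possibly) modified list
      guard !out.isEmpty else { return } // everything blocked: dead end
      for next in module.connections {   // oldest connection first
          dispatch(out, into: next)
      }
  }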
lid55
MidiFire Beta
Posts: 75
Post by lid55 on May 21, 2018 16:04:39 GMT
Thanks for the excellent info Nic. I think I understand, but to make sure, I'm hoping you can confirm that the following example is correct:
- If SBModuleA (SB = StreamByter) has its output connected to both SBModuleB and SBModuleC (and assuming they were connected in the order A to B, then A to C), then the events will go from SBModuleA to SBModuleB first, and then from SBModuleA to SBModuleC. But if I instead wanted the rules to be processed from SBModuleA to SBModuleC first, I'd disconnect both connections and then reconnect both, making sure to connect A to C before A to B.
- [OR... do you have to delete and create new modules/connections to affect this order of processing the rules? If that's the case, do the new modules have to have unique names, so as not to be confused with the old modules?]
Also, to see if I understand how the processing of the rules works: so a single ruleset that has 20 rules/lines in a single StreamByter module will process as fast as the same 20 rules divided into single rules in 20 StreamByter modules connected in series?
nic
Soapbox Supremo
Troublemaker
Press any key to continue
Posts: 2,011
Post by nic on May 21, 2018 16:30:32 GMT
Hi lid55,
Yes, your understanding is correct. You only need to change the connections to affect the order; deleting/recreating the modules is not necessary.
Second question: the time it takes to process 1 x 20 lines vs 20 x 1 lines is indeed the same, but passing an event through 20 modules instead of 1 *will* have some overhead, since the engine has to push it through 20 modules instead of 1 ... and if you have visualisation switched on, then that's 20 modules to flash! If you're looking to squeeze performance, you can turn event visualisation off in Setup.
However, in all of this, I think the differences between the various methods are going to be immaterial. Pushing an event through a scene from in to out is likely to be measured in thousandths of a millisecond. The bottlenecks are probably in the delivery and dispatching parts (i.e. CoreMIDI).
Regards, Nic.
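PS: in terms of the hypothetical model I sketched earlier in the thread, the 20 x 1 case is just 19 extra dispatch hops for the same rule work:

  // 20 single-rule modules in series vs 1 module with 20 rules: the rules cost
  // the same, but the chain adds 19 extra dispatch() calls (and 19 extra
  // flashes if visualisation is on).
  let chain = (0..<20).map { _ in Module() }
  for (a, b) in zip(chain, chain.dropFirst()) { a.connections.append(b) }
  dispatch([[0xFA]], into: chain[0])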
lid55
MidiFire Beta
Posts: 75
Post by lid55 on Jun 13, 2018 16:11:25 GMT
Thanks for all the info Nic, I find it's quite a bit to wrap one's head around... figuring out the details of timing through apps and iOS. Seems that most of the timing issues in MidiFire are negligible... but... I think once those MIDI messages hit iOS (Core MIDI)... it might be another story.
You mentioned above that "I would tend to only send the actual required events out to apps rather than expecting the apps to ignore stuff", so I have a question about using ports.
Would you always use a unique port for each destination? The fact that the number of ports in MidiFire is selectable seems to suggest that using fewer is better/faster. Is it possible that using the same port in MidiFire could potentially improve the timing issues (at least in my case, described earlier in this thread), since the MIDI messages would hit CoreMIDI at the same time? Any idea if CoreMIDI processes MIDI messages the same way MidiFire does? In other words, does CoreMIDI take the first MIDI message it receives and totally complete its path to the destination before moving on to the next MIDI message?
I've found that, for my system, sending a MIDI message from MidiFire out the hardware/interface and into my desktop computer's interface can have a variability as high as 19ms. Sending MIDI messages within the iOS device itself (to other apps) has a variability more on the scale of 3ms.
Regards, - Brett
nic
Soapbox Supremo
Troublemaker
Press any key to continue
Posts: 2,011
Post by nic on Jun 14, 2018 5:49:02 GMT
Hi lid55,
I don't know the inner workings of CoreMIDI, but I suspect it works differently, since it uses timestamps to decide when to schedule events. There's a separate 'MidiServer' process that receives events from apps over (I think) a 'mach port', which is like an internal pipe. MidiServer probably (and this is just a guess) runs multiple threads, but it likely boils down to one thread that receives events and puts them in some sort of queue, and another that picks them off the queue and transmits them to the destination when the time arrives, or immediately if the time has already passed.
Using the MidiBus app on an iPhone SE, looping back a clock signal to/from CoreMIDI measures latency as around 0.1 ms with +/- 0.007 ms standard deviation. Thus, MidiServer itself seems pretty efficient.
In terms of sound-generating apps, there could well be latency related to the audio engine cycle. Say the audio engine is running at 44.1kHz with a 512-sample buffer: then by the time an event sounds there is a margin of error of 1 render cycle, which would be roughly 11.6ms wide, so the standard deviation would be about 6ms in that case. If the (global to all apps in iOS) sample buffer size is reduced, then this deviation should also be reduced. If you're getting a deviation of 3ms, that might suggest the sample buffer size in use is 256.
As for the hardware, well, I guess this will depend upon all the steps needed to get an event from A to B. DIN MIDI has a throughput of 31250 bits/second, so pushing one 3-byte event down the wire needs about 1ms (each byte is 10 bits on the wire). However, the latency in getting that event through the generic USB driver, down the link and then interpreted also has to be taken into account. Of course, if there are multiple messages being sent at the same time, these have to be queued onto the link and so will add latency.
Regards, Nic.
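PS: the back-of-envelope arithmetic above, worked through (a sketch using the assumed values from this post, not measurements):

  // Worked figures for the estimates above (assumed values, not measurements).
  let sampleRate = 44_100.0              // Hz
  print(512.0 / sampleRate * 1000.0)     // ≈ 11.6 ms per 512-sample render cycle
  print(256.0 / sampleRate * 1000.0)     // ≈ 5.8 ms per 256-sample cycle (≈ 3ms deviation)
  print(3.0 * 10.0 / 31_250.0 * 1000.0)  // ≈ 0.96 ms to clock one 3-byte event onto DIN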