I'm trying to build an application that receives certain network packets containing information from the clock it's playing (I receive the BPM, what beat it is on, and information about the position in the song). I already have a MIDI clock sender (driven by a high-resolution timer) that sends 24 messages per beat (the input is only 4 messages per beat). Now I want to generate a SMPTE audio signal from this information with as low a delay as possible, but I'm not sure of the best way to do it (I don't have much experience with audio programming). What would be the best way to build the SMPTE sender?

- Using pre-recorded SMPTE audio and setting its position with the playhead? (That doesn't look like an "elegant" solution.)
- Writing the AudioSampleBuffer in processBlock, with variables tracking the position of the input signal?
- Using processBlock(AudioSampleBuffer, MidiBuffer), where I first write to the MIDI buffer and then convert that to audio signals in processBlock?

It has to be as close to realtime as possible, so I guess I should set the sample rate as high as possible and the buffer as small as possible (without things becoming unstable). Thanks! I hope you understand my explanation :)

---

Having generated and read LTC timecode a lot over the years, it sounds like your general scheme might be problematic. LTC is a running 'clock' at 24, 25, 29.97 (drop frame), or 30 frames per second, counting frames, seconds, minutes, and hours. Each frame is 80 'bits', with the bits sent using a frequency-modulation (biphase mark) encoding scheme between approximately 1200 Hz and 2400 Hz (at 30 fps). So the first potential problem is what you are converting from and to. There is a standard for sending SMPTE over MIDI, called MTC: you periodically get a complete time message, then short sub-frame messages in between. In that case, you have the time you are receiving (plus MIDI latency and jitter) and the time you are sending.

You can't instantly convert received MTC messages to a new timecode message, because each frame must be sent in its entirety or the receiver will think it lost sync. Non-incremental time messages also send most equipment into resync mode. The normal way to deal with this in broadcast equipment is a software phase-locked loop of sorts: you have the time you are sending and the time you are receiving, and if they drift apart you gradually speed up or slow down the code you are sending, by up to about +/- 2% or so. Converting from a MIDI clock or beat message would be an extra step, and an LTC receiver typically cannot handle radical changes in clock speed or instantly deal with large jumps in the clock.

The other big problem is that, while LTC SMPTE can be wired like an audio signal, it's atypical. Between the inherent aliasing of sampling and all the filtering on a typical digital audio output, it is hard to get the required levels and wave shapes. If you were to hook a SMPTE generator to a typical PC audio input, record it, and play it back, it would sound like the same painful warble to your ears, but most LTC readers would not lock to it. This is why LTC is normally generated and read by a tiny MCU in hardware connected to the PC. In the cases where people use LTC, usually everyone just spends the $200 for a fully compliant signal instead of futzing and futzing to get gear to talk.

---

Thanks for the answer! I know it's going to be very hard to get a reliable "clock", but the problem is that the device only sends network packets, from which I can get the tempo/position. I'm going to send the timecode to Resolume (a video program), possibly via internal audio routing. I think that program handles frame jumping pretty well (I'll do some tests with it now). I know I'll have to work with some sort of "prediction" of the input packets to get continuous SMPTE, but that won't be that hard a problem.