I didn't know that, but it makes perfect sense, thanks. I really don't have the hard disk space to download the Reason+ trial, but I am sorely tempted to try Mimic's different stretch options.
Mimic: New Creative Sampler
Hey, been playing around with Mimic and its slice mode. So we can't pitch every slice up and down individually, then? And the workaround would be automating the pitch semi setting? It would be great to pitch the slices up/down for vocals.
I would use Dr. Octo Rex if you want to adjust parameters and effects on individual slices.
Sure, but Dr. Octo Rex doesn't do timestretch.
Edit: Why couldn't we select a slice and choose a semi setting for every slice individually? That would solve the whole thing.
Edit: Or then the Multi-Slot would also slice?
Mimic doesn't need to replace Dr. Octo Rex. I think if they can incorporate a REX export (in addition to import of full REX files), that would be great for workflow, so then if you decide you want independent control over slices, you would move directly to Dr. Octo Rex from Mimic (rather than having to go to the sequencer to create a REX loop).
I think inside Mimic it could bloat and complicate the interface. There are 4 modes, each set of settings is per slot, and there are 8 slots. Another solution internal to Mimic I suggested in this thread would be an option to "send slices to slots," so you could at least have independent control over 8 slices, which would then be loaded into each of the 8 slots. A workaround for now is to load the same sample into each slot and define the start and stop times to be different slices. Again, that's also not great for workflow.
But yeah, if I am very focused on slices and independent control of slices, I'd move to Dr. Octo Rex.
Same here. Tried it with a few loops, melodic and percussive, and right away I got something out of it that would work for me in electronic music. So for me, this device is a cool addition to Reason.

selig wrote: ↑17 Aug 2021 Inspiring tools and impressive feature sets don't always go hand in hand. Pro Tools has a much more impressive feature set; I've used it professionally since the early 1990s. Yet I can count the number of songs I've written in PT on one hand, literally. Reason OTOH, with/despite all its limitations, still inspires me. Frustrates me too, but more often inspires.
When testing Mimic I immediately created a cool little song, and that has long been my test of any new instrument - can it inspire a song (features be damned).
I just wish they'd add copy and paste for slots. I mean, isn't it obvious that it should have been there from version 0.1?
tx
M
But like I suggested, why couldn't the Octo Rex get timestretch - otherwise it wouldn't do what I wanted.

joeyluck wrote: ↑17 Aug 2021 Mimic doesn't need to replace Dr. Octo Rex. I think if they can incorporate a REX export (in addition to import of full REX files), that would be great for workflow, so then if you decide you want independent control over slices, you would move directly to Dr. Octo Rex from Mimic (rather than having to go to the sequencer to create a REX loop).
Yes, I agree. Just wanted to make a distinction between what Mimic has to do and the timeline, to explain to anyone reading why it might not be able to match its timestretch.

selig wrote: ↑17 Aug 2021 I get it, the last process is in real time - but it's not the same "real time" as you get with pitching a live signal with something like Polar etc. THAT is real time pitch shifting - if you have to load a sample or record audio to the timeline first, then one part of the process is non real time, no?
Compare running a speaker and mics to a live room and sending an audio signal into that room and returning the microphones to the mix in real time, vs shooting a swept sine into the same room with the same speakers/microphones, de-convolving that audio signal, and later convolving it with live audio to create the same effect - but decidedly not in "real time" when compared to the "real" real time. I guess it depends on when you start the clock whether or not you call it real time…
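That sweep/deconvolution idea can be sketched in a few lines of numpy, just to make the analogy concrete - "record" a sweep through a toy room, deconvolve to recover the impulse response, then convolve that IR with other audio later. This is a made-up toy illustration (the 4-tap "room" and all numbers are arbitrary), nothing Reason-specific:

```python
import numpy as np

def deconvolve_ir(sweep, recording, n_fft):
    """Recover a room impulse response by spectral division:
    recording = sweep (*) ir, so FFT(recording)/FFT(sweep) = FFT(ir)."""
    S = np.fft.rfft(sweep, n_fft)
    R = np.fft.rfft(recording, n_fft)
    return np.fft.irfft(R / (S + 1e-12), n_fft)  # tiny offset avoids /0

# Toy "room": a 4-tap impulse response (direct sound + decaying echoes).
ir_true = np.array([1.0, 0.5, 0.25, 0.125])

# Excitation: a short swept sine (the offline measurement step).
t = np.linspace(0, 1, 2048)
sweep = np.sin(2 * np.pi * (20 + 400 * t) * t)

# "Record" the sweep through the room.
recording = np.convolve(sweep, ir_true)

# Deconvolve to recover the IR; convolving it with new audio later
# reproduces the room - but decidedly not in "real" real time.
n_fft = len(recording)
ir_est = deconvolve_ir(sweep, recording, n_fft)[: len(ir_true)]
```

After this, `ir_est` closely matches `ir_true`, which is the whole trick behind convolution reverb: the measurement and deconvolution happen offline, only the final convolution runs live.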
Another interesting detail is that, unlike Neptune and Polar, there's no option for the quality of the process. That would have allowed low-latency playing, but a higher-quality bounce.
I had a lot of fun with it, feeding mimic decades old vocal recordings and turning them into playable vocal instruments.
I used to do a bit of that with Grain, but this handles vocals much better, as well as instrument sounds.
I get what you're saying. I've been using Mimic mostly in pitch mode/multi-pitch. Do you want each slice to also have a different stretch mode and speed? If not, another sort of workaround would be to assign the stretch you want in the sequencer and then create a REX file out of that.

Heigen5 wrote: ↑17 Aug 2021 But like I suggested, why couldn't the Octo Rex get timestretch - otherwise it wouldn't do what I wanted.
All I'd like to do is timestretch and pitch-shift every slice. I'm not sure there's a need for a different stretch mode per slice, but if I wanted to work on vocal samples, I could then change the pitch and keep the sample speed too. There's a semi knob for the pitch - I guess I could automate it, but then again, we could just get an update to the Slice mode so we could click on every slice to pick a note for it.

joeyluck wrote: ↑17 Aug 2021 I get what you're saying. I've been using Mimic mostly in pitch mode/multi-pitch. Do you want each slice to also have a different stretch mode and speed? If not, another sort of workaround would be to assign the stretch you want in the sequencer and then create a REX file out of that.
I haven’t used it yet.
At the mo it looks like I’ll be waiting till R13 for the privilege.
But I was waiting to hear whether the stretch algorithms in mimic surpass what can be done in the sequencer.
That would be such a Reason move. To improve stretch and pitch but bury it in a new device so you have to bounce everything through it like it is with resampling.
What’s the general consensus?
Perpetual Reason 12 Beta Tester
You can check out my music here.
https://m.soundcloud.com/ericholmofficial
Or here.
https://www.youtube.com/channel/UC73uZZ ... 8jqUubzsQg
Copy/paste for slots +1
Mimic makes more sense once you start to think about the new Combinator. On its own, Mimic maybe isn't very interesting, but within the modular playground that is Reason, the only limitation is lack of imagination. With custom GUIs and customizable macro scripting, you will be able to build more complex sampler devices using a Combinator. I think the new Combinator will bring the ReFill market back from the dead because of this ability to package custom devices with unique functionality inside a GUI.
Ya know, I was thinking Dr. Octo Rex stretched slices when pitching, but it doesn't, does it? It slows them down or speeds them up. It would be handy if they incorporated stretch in some basic form. In terms of per-slice controls in Mimic, how do you think that should be presented?

Heigen5 wrote: ↑17 Aug 2021 All I'd like to do is timestretch and pitch-shift every slice. I'm not sure there's a need for a different stretch mode per slice, but if I wanted to work on vocal samples, I could then change the pitch and keep the sample speed too.
With no expectations, and after some playing around with the device, I've learned that you can sculpt some really interesting sounds by resampling with different stretch modes, speeds, and other modulations.
I sent audio out of my interface directly back into my interface with a short TRS, and recorded the input into Mimic (noise with artifacts basically). Then I hooked up the sampling input in the HW I/O to control out and then resampled over and over until I had some really cool stuff to work with. Try Melody stretch mode and 1 octave up on the keyboard, record that and then sample it slowed down back to where it originally was.
It's particularly good for glitch and bass sounds
Last edited by aeox on 17 Aug 2021, edited 1 time in total.
I find Reason's time stretch as good as Serato Sampler, which uses Zynaptiq, and Loopcloud, which uses Zplane. Ableton Live also uses Zplane time stretching. Reason has one of the better algorithms, but it wasn't designed for extreme time stretching.

plaamook wrote: ↑17 Aug 2021 But I was waiting to hear whether the stretch algorithms in Mimic surpass what can be done in the sequencer. That would be such a Reason move. To improve stretch and pitch but bury it in a new device so you have to bounce everything through it like it is with resampling. What's the general consensus?
So with Mimic we will get an additional timestretch algorithm.
It's true, Dr. Octo Rex doesn't stretch at all when changing pitch. Another downside is that it would be a pretty big workflow killer to use ReCycle for that purpose. So how do I think it should be implemented for the Slice mode? I'd just want to select a slice or slices of choice and turn a mini knob per slice to pitch-shift/stretch them. There's enough space at the bottom of the slices to add a small knob for that.

joeyluck wrote: ↑17 Aug 2021 Ya know, I was thinking Dr. Octo Rex stretched slices when pitching, but it doesn't, does it? It slows them down or speeds them up. That would be handy if they incorporated stretch in some basic form. In terms of per-slice controls in Mimic, how do you think that should be presented?
To anyone reading our opinions about Mimic's stretch quality - I based my observations on seeing other people use it, whereas Giles has actually done a direct A/B comparison - trust his opinion of the quality more than mine!
Secondly, I did not mean "100% realtime, like an effects device" - but I should have been clearer. I meant "rendered on the fly" vs "rendered offline". But then I muddied the waters by going off on a tangent about using plugins.
So if I confused anyone: "realtime" pitch-shifting like an audio effect is related to, but not the same as, precalculated stretch/pitch processing like a sampler or DAW. Neither can actually be done 100% in "real" time, simply because they need enough audio data to start making the calculations. In the case of a sampler, it gets that data when you import the audio, so playback is latency-free, but with a CPU hit for complex algorithms, especially if you are doing lots of polyphony and modulation.
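The "needs enough audio data" point can be made concrete with a toy overlap-add (OLA) time stretch, the simplest cousin of what samplers do. This is only a hedged sketch in plain numpy - not Mimic's actual algorithm, and the grain/hop sizes are arbitrary:

```python
import numpy as np

def ola_stretch(x, rate, grain=512, hop=128):
    """Naive overlap-add time stretch: read grains from the source at
    rate*hop spacing but write them at hop spacing, so rate < 1 slows
    the audio down (longer output) without repitching the grains."""
    window = np.hanning(grain)
    out_len = int(len(x) / rate) + grain
    out = np.zeros(out_len)
    norm = np.zeros(out_len)
    for i in range((out_len - grain) // hop):
        read = int(i * hop * rate)   # position in the source buffer
        write = i * hop              # position in the output buffer
        if read + grain > len(x):
            break                    # ran out of source audio
        out[write:write + grain] += x[read:read + grain] * window
        norm[write:write + grain] += window
    return out / np.maximum(norm, 1e-9)  # undo the window overlap gain

# A one-second 220 Hz sine at 8 kHz, stretched to twice the length.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 220 * t)
y = ola_stretch(x, rate=0.5)
```

Note that each output grain needs `grain` samples of source audio up front, which is exactly the buffering that makes this "rendered on the fly" rather than zero-latency, effect-style realtime.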
selig wrote: ↑17 Aug 2021 I get it, the last process is in real time - but it's not the same "real time" as you get with pitching a live signal with something like Polar etc. THAT is real time pitch shifting - if you have to load a sample or record audio to the timeline first, then one part of the process is non real time, no?

avasopht wrote: ↑17 Aug 2021 Yes but Mimic and Kontakt are doing timestretch on a per-note basis and can respond to the pitch wheel and portamento. Even if you've loaded the sample into RAM and processed the hell out of it, it still has to perform a timestretch in real time based on the note and pitch offset.
They open up new ways of viewing Reason for sure, but this great zooming and transient detection should really exist (in this way) in the sequencer as well, and tempo detection would help me so much right now - I'm doing that manually when remixing and when restoring old recordings. Something I would also really enjoy is a per-part EQ & effects lab in the sequencer, instead of having to cut out pieces, assign them to new tracks, find effects, etc. I'd rather wait a while for R12 to be massive.
Also, it would be very useful to have random noise as a mod source. The current random LFO is just not the same. Something just like how we can modulate parameters with noise in Europa and most other synths.
Put a mod matrix on the back to replace all that text.
With copy/paste to slots, mod matrix, and audio rate noise to modify parameters I wouldn't have many other critiques.
In my mind it's not the sampler to replace the more feature-rich options out there (which I don't use at all anyway). I'm still getting nice results by resampling with different speeds and stretch modes.
Some might be disappointed by the seemingly bare feature set, but don't let that stop you from creating some unique and interesting textures and sounds. If they want to make an NN-XT replacement, I'm sure they have been toying with the idea and working on something in the background. This isn't it!
Aeox: That random noise modulation from Europa you mention - can it be sent out as CV from Europa or other instruments? If it can, you can already use it to modulate Mimic.
And I should add that I tested things quite quickly, using a few different drum loops (audio files, not REX) that I had lying around for testing such things. So my "analysis" was by no means exhaustive on any level!

chaosroyale wrote: ↑18 Aug 2021 To anyone reading our opinions about Mimic's stretch quality - I based my observations on seeing other people use it, whereas Giles has actually done a direct A/B comparison - trust his opinion of the quality more than mine!! Secondly, I did not mean "100% realtime, like an effects device" - but I should have been clearer. I meant "rendered on the fly" vs "rendered offline". But then I muddied the waters by going off on a tangent about using plugins.
I think that the history of "real time" is also adding to MY confusion/understanding of the term. For instance, convolution reverbs used to not be able to convolve (the final step) in real time, so when they could finally do this it was called "real time" processing (to explain the final stage of the process). Same for time stretching, which used to be possible only as an offline process (especially for highest quality renders).
So it is indeed remarkable to have even a part of the process be "real time", and I did not intend to imply it's not a big deal.
But I will add that I find it most interesting that the "real time" stretching in Mimic is able, at least in some cases, to beat the offline calculations used behind the scenes for audio on the timeline. Again, it's horses for courses, and it's always desirable to have more options in this arena.
Selig Audio, LLC