Performing Live - Depths of the Rabbit Hole

This forum is for discussing Reason. Questions, answers, ideas, and opinions... all apply.
Raveshaper
Posts: 1089
Joined: 16 Jan 2015

20 Feb 2015

I have spoken about this here and there, but I wanted to start an official thread geared more toward those interested in live performance, to gauge interest in the topic. I may be repeating myself; my schedule has been busy.

Anyone who has input on performing live with Reason can chime in here or link to alternate threads.

I have been developing a template for live performance and improvised composition for quite some time now. The goal was to try and get an "Ableton Live-like" experience in Reason in terms of live performance, and a much faster and more improvised way to compose track ideas. While anyone can control a Kong or play notes into a synth, I really pushed myself to come up with something more comprehensive.

I lack outboard gear, so I had to emulate it myself. I built a virtual DJ setup with:

- two decks and a crossfader
- four layers of sound per deck (or up to eight total using only one deck), with each sound independently assignable
- an effects chain with seven effect types in series that can be sent to and returned on either deck, or both at once
- an advanced dual-function switchable rotary input system that allows eight knobs to act like sixteen
- a realtime vinyl effect to perfectly match pitchbend and track tempo for smooth playback of Dr. OctoRex (no gaps or glitches)
- a cue mix/sound auditioning feature for multichannel audio interfaces
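To give a taste of what the dual-function rotary part means at the codec level, here's a bare-bones Lua sketch. This is not my actual DataBridge setup, just the general principle of a shift button picking which bank of Remote items the physical knobs feed. The callback and helper names are from my memory of the Remote SDK docs, and the note/CC numbers are made up for the example, so double-check everything against the SDK before building on it.

-- Sketch of "eight knobs acting like sixteen": a shift pad selects
-- which bank of Remote items the physical knobs address.
local shift_held = false

function remote_init(manufacturer, model)
    local items = {}
    -- Sixteen logical rotaries fed by eight physical knobs.
    for i = 1, 16 do
        items[i] = {name = "Rotary " .. i, input = "value", output = "value", min = 0, max = 127}
    end
    items[17] = {name = "Shift", input = "button", min = 0, max = 1}
    remote.define_items(items)
end

function remote_process_midi(event)
    -- Holding the shift pad (note 60 in this example) selects the second bank.
    local ret = remote.match_midi("9? 3c xx", event)
    if ret ~= nil then
        shift_held = (ret.x > 0)
        return true
    end
    -- Knobs arrive as CC 16-23; route them to items 1-8 or 9-16.
    ret = remote.match_midi("b? yy xx", event)
    if ret ~= nil and ret.y >= 16 and ret.y <= 23 then
        local index = (ret.y - 16) + 1
        if shift_held then
            index = index + 8
        end
        remote.handle_input({item = index, value = ret.x, time_stamp = event.time_stamp})
        return true
    end
    return false
end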

My sounds can be triggered in the normal one-shot style, with a variable speed note repeat using rotaries for rate changes, or an optional toggled pressure sensitivity mode that allows aftertouch to control the repeat rate. These note repeat modes are globally applied to each deck using two dedicated note repeat signals.

Everything can be recorded in real time, almost too well at the moment. I still need to improve the way rotaries record so that knobs I'm not rotating don't overwrite themselves, but this should be a minor thing. My latest breakthrough was the vinyl effect. It can be used for some interesting phase shifts once it's been recorded and the tempo change is disabled. Everything remains at the same pitch, but with smoothly changing artifacts. I'll have to make an audio example and link it here.

This is a very complicated system I've come up with, and it ties into nearly every area of using Reason except the development of REs. MIDI maps and codecs, face-melting CV networks, brain-twisting audio routing, and a fair amount of things on the no-no list (such as heavily distorted infinite feedback loops) -- it's all here. And it only takes 2 bars of DSP to run it without samples loaded.

All this considered, I admit this is not for everyone. In fact, I would go so far as to say that this isn't for anyone at all, and that I want to make this easier on people by developing a better way of doing all this. But in the meantime, I can share what I know. It'll take a while to write it all up in the tutorials section, but I'll get there eventually. If anyone is interested.
:reason: :ignition: :re: :refillpacker: Enhanced by DataBridge v5

jallen97
Posts: 61
Joined: 17 Jan 2015

21 Feb 2015

Sounds very interesting to me…

Raveshaper
Posts: 1089
Joined: 16 Jan 2015

21 Feb 2015

Methinks my dire warnings of complexity have scared off the audience. Even if you're the only one who reads my articles, I'll write them up. It'll take time.
:reason: :ignition: :re: :refillpacker: Enhanced by DataBridge v5

PSoames
Posts: 278
Joined: 15 Jan 2015
Location: Somerset, UK

22 Feb 2015

Raveshaper wrote:Methinks my dire warnings of complexity have scared off the audience. Even if you're the only one who reads my articles, I'll write them up. It'll take time.
Go for it. Don't be put off by the silence.

Stuff like this will serve as a permanent record of technique and inspiration for others. I for one would read your articles.

machinestatic
Posts: 30
Joined: 17 Jan 2015

23 Feb 2015

Raveshaper wrote:Methinks my dire warnings of complexity have scared off the audience. Even if you're the only one who reads my articles, I'll write them up. It'll take time.
PSoames wrote: Go for it. Don't be put off by the silence. Stuff like this will serve as a permanent record of technique and inspiration for others. I for one would read your articles.
Indeed. Don't take the lack of replies the wrong way. I read your whole post and I'm intrigued, but I just didn't reply because it wouldn't really add much to say "awesome" and nothing else.

But now that you've guilt tripped me... (just kidding)

Awesome!

I'd love to see a video of this. Sometimes I worry that those of us who are currently learning about Reason, but aren't learning Ableton at all, are handicapping ourselves if/when we attempt to perform live in the future. So this would be very interesting. 

Raveshaper
Posts: 1089
Joined: 16 Jan 2015

23 Feb 2015

Guilt trip wasn't my intention but I guess it does read like one.

Comparing this to Live is unfair of me since I haven't had any experience with it yet, but what I've been working on, using every method built into stock Reason, feels light years ahead of my previous use of the program and pushes it toward a more Live-like experience.

My video production setup is a bit lacking at the moment, but I will definitely invest in being able to do that.
:reason: :ignition: :re: :refillpacker: Enhanced by DataBridge v5

Raveshaper
Posts: 1089
Joined: 16 Jan 2015

23 Feb 2015


In the meantime, let me shamelessly plug myself.
My latest work.
:reason: :ignition: :re: :refillpacker: Enhanced by DataBridge v5

bouwie
Posts: 32
Joined: 17 Jan 2015

24 Feb 2015

On stage I use only a laptop and an Edirol PCR-300, in a five-person band.
There are some backing tracks of vocals, synths, drums, and guitars on audio tracks.
I put them in Blocks, so every block is a song. A shame I can't trigger the next block from my controller; it can only be done with the mouse.
And for every song I've made a Combinator with all the stuff I play in real time. All the knobs, buttons, and CV of the Combinator are used to mute, solo, balance, and trigger sounds in the Combinator. A shame that there are no more CV inputs on a Combinator. When I have to change songs, I press a button to switch to a new Combinator patch and select a new block with the mouse, and I'm ready to go in about 10 seconds.
I also play keys in a two-person band and am figuring out how to make a setup for live playing in that situation. It's just me and a female vocalist, so I want to make the show as live as possible, not just play a patch along with a pre-recorded .wav file.

electrofux
Posts: 863
Joined: 21 Jan 2015

25 Feb 2015

While I am not performing live, I tried to build a setup for production that follows a more Live-like workflow, with loads of realtime step sequencing, Launchpads, iPads, etc.

There are so many barriers to overcome that it's not funny. Remote-controlled step sequencing is pretty much only possible with Euclid and Thor, and those are single channels, which means you need loads of Remote codecs. They are 16 and 32 steps only, with no pattern switching. I found a complicated Volt SL 1 solution for the step problem. Note-to-track is another well-known problem.

The absence of proper punch-in/out recording is another problem, which needs MIDI loopback and another Volt SL setup.

There is also no proper looper in Reason. OK, there is Ochen K's Repeater, but it also lacks punch in/out and doesn't keep its audio.

Recently I have dug into Live and Push, and while Live has its own areas where it lacks compared to Reason, the Push and Session View part is pretty much genius. Session View especially should be accepted as a standard and needs to be addressed by the Props, but this is nothing really new.

I am even considering getting Live after so many tries to build a nice Reason-only setup. That's why I am trying hard to integrate the Reason devices with Live, which has opened a new can of worms ;-)

Raveshaper
Posts: 1089
Joined: 16 Jan 2015

25 Feb 2015

electrofux,

You can simplify your setup somewhat by developing your codecs in Lua. That format lets you define the bidirectional MIDI that's required for things like visual feedback while hitting pads, etc., but understanding how to use it requires applying to be a Remote developer.
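For anyone curious what the bidirectional part looks like, here's a stripped-down sketch of the idea: a single pad whose LED follows the mapped parameter in Reason. The callback and helper names are from my memory of the Remote SDK docs, and the note number is just an example, so verify everything against the docs before building on it.

-- One pad item: the codec forwards pad hits into Reason, and sends
-- the mapped value back out so the pad's LED mirrors the rack.
local pad_value = 0
local pad_dirty = false

function remote_init(manufacturer, model)
    remote.define_items({
        {name = "Pad 1", input = "button", output = "value", min = 0, max = 1},
    })
end

function remote_process_midi(event)
    -- Pad sends note 36 in this example.
    local ret = remote.match_midi("9? 24 xx", event)
    if ret ~= nil then
        local value = 0
        if ret.x > 0 then value = 1 end
        remote.handle_input({item = 1, value = value, time_stamp = event.time_stamp})
        return true
    end
    return false
end

function remote_set_state(changed_items)
    -- Reason calls this when mapped parameters change.
    for _, index in ipairs(changed_items) do
        if index == 1 then
            pad_value = remote.get_item_value(1)
            pad_dirty = true
        end
    end
end

function remote_deliver_midi(max_bytes, port)
    -- Push the LED state back out to the controller.
    local events = {}
    if pad_dirty then
        table.insert(events, remote.make_midi("90 24 xx", {x = pad_value * 127}))
        pad_dirty = false
    end
    return events
end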

You may be able to improve your sample/instrument triggering by using a rotary encoder to dial through which device gets accessed by the MIDI loop. You would need a codec to bind to the receiving Combinator patch. You may also be able to exploit the additional override mappings for loading next and previous patches, for quick access to different sounds you've designed.
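On the patch-stepping part, the mapping side of it looks roughly like this in a .remotemap file (tab-separated columns; I'm writing the item names from memory and leaving out a few of the extra columns the real files have, so copy the exact spelling and layout from one of the stock map files rather than from me):

Scope	Propellerheads	Combinator
Map	Button 1	Select Next Patch
Map	Button 2	Select Prev Patch
Map	Rotary 1	Rotary 1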

I'm still a bit busy, but I'll put up something soon. It might be helpful for the issues you're having, but if you're using multiple codecs already, it's probably much the same. You're using methods that go beyond my experiments (iPad, multiple hardware controllers), so if anything you could add to my material once it's all up.
:reason: :ignition: :re: :refillpacker: Enhanced by DataBridge v5

electrofux
Posts: 863
Joined: 21 Jan 2015

26 Feb 2015

Raveshaper wrote:electrofux, You can simplify your setup somewhat by developing your codecs in Lua. That format lets you define the bidirectional MIDI that's required for things like visual feedback while hitting pads, etc., but understanding how to use it requires applying to be a Remote developer. You may be able to improve your sample/instrument triggering by using a rotary encoder to dial through which device gets accessed by the MIDI loop. You would need a codec to bind to the receiving Combinator patch. You may also be able to exploit the additional override mappings for loading next and previous patches, for quick access to different sounds you've designed. I'm still a bit busy, but I'll put up something soon. It might be helpful for the issues you're having, but if you're using multiple codecs already, it's probably much the same. You're using methods that go beyond my experiments (iPad, multiple hardware controllers), so if anything you could add to my material once it's all up.
Yeah, I am actually a Remote developer and use Lua and the codec specifics all the time, and can make a Launchpad blink like I want. You have no idea how often I have asked for e.g. remotable steps in Redrum and mailed RE developers for this or that Remote feature :-) But most sequencing devices lack these features. And there is no Session View, probably the standard for live performance. And the looper(s) are not good enough to do the stuff e.g. Reyne does. Reason needs a "Live"-oriented update.

Raveshaper
Posts: 1089
Joined: 16 Jan 2015

26 Feb 2015

When a wealth of those features already exists elsewhere, it seems cost-effective to go with the platform that pioneered or innovated upon the standard of Session View.

Suddenly, it seems like a heck of a mountain, and it may be too steep for me to want to climb. It's those little day-to-day things that add up.

Edit: I had hoped that gaining more knowledge could simplify or improve my workarounds and legitimize them a bit in terms of practical tools and understandable interfaces. But now, it looks dubious.
:reason: :ignition: :re: :refillpacker: Enhanced by DataBridge v5

electrofux
Posts: 863
Joined: 21 Jan 2015

27 Feb 2015

Raveshaper wrote:When a wealth of those features already exists elsewhere, it seems cost-effective to go with the platform that pioneered or innovated upon the standard of Session View. Suddenly, it seems like a heck of a mountain, and it may be too steep for me to want to climb. It's those little day-to-day things that add up. Edit: I had hoped that gaining more knowledge could simplify or improve my workarounds and legitimize them a bit in terms of practical tools and understandable interfaces. But now, it looks dubious.
Different people work differently. There are a lot of different approaches to playing live, and I bet there are people who put Reason to good use live. It just depends on what you expect it to do.

I also don't know if it would be cost-effective to use Live only, since you would probably have to invest in VSTis. I don't find that the stock devices are enough.

Raveshaper
Posts: 1089
Joined: 16 Jan 2015

27 Feb 2015

I wouldn't mind the option of investing in VSTis. I can *carry them* across different platforms as I expand my knowledge base. But I agree, Live by itself isn't the answer.
:reason: :ignition: :re: :refillpacker: Enhanced by DataBridge v5
