Question: Can REASON use a DSP audio card?

This forum is for discussing Reason. Questions, answers, ideas, and opinions... all apply.
User avatar
pjeudy
Posts: 1559
Joined: 17 Jan 2015

12 Feb 2015

Other than getting a faster processor and more memory... is there an audio card that can help with, for example, effects (which can slow things up!) by taking some of the load off the CPU? Thanks.
My opinion is that Propellerhead REASON needs a complete rewrite!
P.S: people should stop saying "No it won't happen" when referring to a complete rewrite of REASON. I have 3 letters for ya....VST

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

12 Feb 2015

Nope.

A faster CPU is a better deal anyway. It can be used for more things, and those DSP cards are not 0 latency.

User avatar
pjeudy
Posts: 1559
Joined: 17 Jan 2015

12 Feb 2015

ScuzzyEye wrote:Nope.

A faster CPU is a better deal anyway. It can be used for more things, and those DSP cards are not 0 latency.
Thanks ScuzzyEye!
My opinion is that Propellerhead REASON needs a complete rewrite!
P.S: people should stop saying "No it won't happen" when referring to a complete rewrite of REASON. I have 3 letters for ya....VST

User avatar
Last Alternative
Posts: 1343
Joined: 20 Jan 2015
Location: the lost desert

12 Feb 2015

OK. I was actually going to make my own thread about DSP, but I'll just ask here: I have an i7 4770, 16 gigs of RAM, an OCZ SSD, over 80 gigs of free space, an AMD Sapphire Radeon 6670, and Windows 7 Ultimate. Everything is up to date, yet somehow my songs are all ramping up and down between 3 and 5 bars! Problem is there are only 6 bars... I don't have anything else running or open, so I don't get it. And my songs are anywhere from 15-23 or so tracks total, and I only use about 1-2 REs/stock devices for each one; nothing on the master bus besides the SSL compressor at 2:1. And I keep my computer clean with Disk Cleanup and scan for viruses and whatnot on a regular basis.

Somebody help! It's been freaking me out lately. Oh, and I record & mix in 88.2 with around a 170 sample rate.
https://lastalternative.bandcamp.com
:reason: 12.7.4 | MacBook Pro (16”, 2021), OS Sonoma, M1 Max, 4TB SSD, 64GB RAM | quality instruments & gear

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

12 Feb 2015

Maybe recording at 88.2 helps, and the low latency can be a benefit, if you're monitoring through Reason.

But after you're done recording, drop down to 44.1, and increase the buffer. If you're not monitoring through Reason while recording (like if you monitor using your audio interface), leave the buffer much larger. Reason will automatically keep recorded audio in line with sequenced tracks based on the audio interface latency when using external monitoring. (And you can bump things around a bit after the recording is done, to make it perfect.)

You can bounce (export) at 192 kHz and re-import, if you're worried about aliasing of the internal instruments for your final mix-down.

User avatar
Last Alternative
Posts: 1343
Joined: 20 Jan 2015
Location: the lost desert

12 Feb 2015

^ I failed to mention, of course, that I have a Focusrite Scarlett 2i4, so yes, I monitor thru Reason...? I always thought the whole song should be recorded AND mixed above 44.1 so as to hand over a high-quality audio file to the mastering engineer. No? Also, if I'm wrong, when exporting to WAV for him do you then put it on 88.2? (That is the highest I go.)

Maybe I'm confused about all of this. I only use my interface for everything audio from Reason to watching movies.
And by "increase the buffer" do you mean to actually go to the smallest sample rate #? (89 I think)
https://lastalternative.bandcamp.com
:reason: 12.7.4 | MacBook Pro (16”, 2021), OS Sonoma, M1 Max, 4TB SSD, 64GB RAM | quality instruments & gear

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

12 Feb 2015

That interface should be able to monitor locally. Just use the mixer to send the input directly to the output when you're recording. But that isn't a big deal, especially if you can mute the tracks of many of the instruments when recording.

When I say recording, I mean the inputs on your audio interface; this doesn't apply to recording MIDI. If you record at 88.2, the audio will be stored in the song file at that rate, but re-rendered to whatever you change your current rate to.

It's only important that recording is done at a higher rate to avoid the possibility of aliasing. The analog low-pass filters in the Focusrite are good enough that it shouldn't be a problem at 44.1, but yeah, 88.2 is safer. But once the audio has been captured without audible aliasing, you can digitally low-pass it and down-sample without worrying about introducing aliasing. That's all sampling rate gives you: the ability to capture higher frequencies without artifacts. 44.1 kHz can capture up to 22.05 kHz.

Once all your audio is captured, it doesn't matter what sampling rate you use in Reason. Just consider that your working rate. It has no bearing on what you export. You can export at any rate. Audio tracks will be re-sampled to whatever rate you choose, from the rate at which they were recorded, not the current working rate. So if you recorded at 88.2, and work at 44.1, but export at 88.2, the audio won't have to be re-sampled for export.

As for the buffer, it's in the Preferences inside of Reason, on the Audio tab, under the Sample rate setting; it says Buffer size. It's measured in samples, and there are displays showing you how much latency is involved. This is latency external to Reason: basically, how long after Reason makes a sound before you hear it, and how long after a sound hits the mic before Reason receives it.

If you're tracking something live with a mic, you want the latency to be sub-10 ms (because the round trip will add up to 20 ms, and that's the threshold where most people start noticing a delay). If you're recording MIDI on a keyboard, 20 ms is good enough. But once everything is recorded and you're working on mixing and sound design, go ahead and increase the buffer as high as you like. The only place there will be any latency is between when you hit play and when you start hearing the audio. A tenth of a second isn't going to matter.
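If it helps to see the arithmetic, here's a quick Python sketch of how buffer size turns into latency (the numbers are illustrative, not anything Reason reports):

Code: Select all

def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """One-way latency contributed by the audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

for buf in (64, 128, 256, 512, 1024):
    one_way = buffer_latency_ms(buf, 44100)
    print("%5d samples @ 44.1 kHz -> %5.1f ms one-way, %5.1f ms round trip"
          % (buf, one_way, 2 * one_way))

At 44.1 kHz, anything up to about 256 samples keeps the monitoring round trip comfortably under that 20 ms threshold.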

If you want your synths internal to Reason to sound their absolute best, you can bounce their tracks to disk at 192 and re-import them into Reason. Reason will then low-pass and re-sample to whatever rate your mastering engineer wants.

Long story short: you can work at whatever rate makes things easier for your computer, and it will have no effect on the final export.

User avatar
Theo.M
Posts: 1102
Joined: 16 Jan 2015

12 Feb 2015

Last Alternative wrote:OK. I was actually going to make my own thread about DSP, but I'll just ask here: I have an i7 4770, 16 gigs of RAM, an OCZ SSD, over 80 gigs of free space, an AMD Sapphire Radeon 6670, and Windows 7 Ultimate. Everything is up to date, yet somehow my songs are all ramping up and down between 3 and 5 bars! Problem is there are only 6 bars... I don't have anything else running or open, so I don't get it. And my songs are anywhere from 15-23 or so tracks total, and I only use about 1-2 REs/stock devices for each one; nothing on the master bus besides the SSL compressor at 2:1. And I keep my computer clean with Disk Cleanup and scan for viruses and whatnot on a regular basis.

Somebody help! It's been freaking me out lately. Oh, and I record & mix in 88.2 with around a 170 sample rate.
Well, the problem is 88.2: Reason at that rate is an impossible task even for powerful machines. It's just an absolute CPU hog, mate, and that's the way it is. I'm currently at 44.1 with 9 tracks and already can't play back a song I'm working on. It has to be bounced.

User avatar
Last Alternative
Posts: 1343
Joined: 20 Jan 2015
Location: the lost desert

12 Feb 2015

^Interesting. I let the whole song play when I'm recording to feel the whole vibe while playing. OK, so now I know I can record at 88.2 and then switch to 44.1 to mix, and use a higher buffer, then export at 88.2 for the mastering dude. And I know where the preferences are; that's why I was asking about it. Thanx for the info!
https://lastalternative.bandcamp.com
:reason: 12.7.4 | MacBook Pro (16”, 2021), OS Sonoma, M1 Max, 4TB SSD, 64GB RAM | quality instruments & gear

User avatar
QVprod
Moderator
Posts: 3496
Joined: 15 Jan 2015
Contact:

12 Feb 2015

Last Alternative wrote: I always thought the whole song should be recorded AND mixed above 44.1 so as to hand over a high-quality audio file to the mastering engineer. No? Also, if I'm wrong, when exporting to WAV for him do you then put it on 88.2? (That is the highest I go.)
I think it's worth stating that you don't actually have to record, mix, or even export at anything over 44.1k to give to a mastering engineer. 44.1k/24-bit .wav files are fine and "high quality" enough. The higher rates, while more accurate as far as the number of samples is concerned, generally don't add any additional audible recording quality, but they do significantly increase your CPU load and file sizes. What you don't want to give a mastering engineer is anything with dither applied.



User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

13 Feb 2015

QVprod wrote:The higher rates, while more accurate as far as the number of samples is concerned, generally don't add any additional audible recording quality, but they do significantly increase your CPU load and file sizes. What you don't want to give a mastering engineer is anything with dither applied.
"Sample rate = accuracy" is a common misconception. It may not be what you meant, but I just wanted to make that clear.

The only thing that sampling rate does is raise the highest frequency that can be represented, that being half the sampling rate (because it takes two samples per cycle to plot a sine). Content below that limit is not stored more accurately.

If it helps you to think of it this way: a 22.05 kHz sine wave created at 44.1 kHz will have samples at each peak and each valley. If instead an 11.025 kHz sine was created, it would have samples at the peaks, at the valleys, and then halfway up the rising side and halfway down the falling side. You could re-sample that 11.025 kHz sine to 22.05 kHz by dropping the zero-crossing samples, and you'd still have exactly the same sound, because those extra samples weren't adding more accuracy; they were simply falling on the same path dictated by the samples that actually represented the waveform. That's what happens with any signal that is below the Nyquist limit: the extra samples already lie on the waveform plotted by the required samples.
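You can convince yourself with a little numpy/scipy sketch (the frequencies here are just illustrative): take a sine below the new Nyquist limit, throw away half the samples, reconstruct, and see that nothing was lost:

Code: Select all

import numpy as np
from scipy.signal import resample

fs = 44100
f = 9000                       # a tone safely below 11.025 kHz, the new Nyquist
t = np.arange(fs) / fs         # one second: an exact number of cycles
x = np.sin(2 * np.pi * f * t)

half = resample(x, fs // 2)    # down to 22.05 kHz, discarding the "extra" samples
back = resample(half, fs)      # reconstruct at 44.1 kHz

print("max reconstruction error: %.1e" % np.max(np.abs(x - back)))  # effectively zero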

In fact, this is part of how MP3 encoding works. The complex audio is split into multiple bands; the lower-frequency ones need fewer cosine coefficients stored to accurately recreate the original samples.

User avatar
QVprod
Moderator
Posts: 3496
Joined: 15 Jan 2015
Contact:

13 Feb 2015

QVprod wrote:The higher rates, while more accurate as far as the number of samples is concerned, generally don't add any additional audible recording quality, but they do significantly increase your CPU load and file sizes. What you don't want to give a mastering engineer is anything with dither applied.
ScuzzyEye wrote: Sample rate = Accuracy, is a common misconception. It may not be what you meant, but I just wanted to make clear.
Part of me thought that might have been a bit misleading to post. Good catch to clarify. In addition to the Nyquist limit, sampling rate works similarly to frame rate on video: the more frames there are, the better (and with more detail) a motion is captured, theoretically. Same with audio: digital recording doesn't record a perfect sine wave but a collection of samples that make up that sine wave, just like video is a collection of pictures capturing motion. That said, a standard is set where the human ear can't perceive any difference.

The two are obviously directly related though, two sides to the same coin perhaps.

User avatar
normen
Posts: 3431
Joined: 16 Jan 2015

13 Feb 2015

Instance-wise, a modern CPU can handle a lot more DSP processing than a "DSP card", unless you buy a whole lot of Pro Tools DSP racks. The main advantage of DSP processors is that they can do the processing with a lot less latency than a CPU, because a CPU isn't really made for low-latency processing, rather for high overall throughput. Additionally, you can't just put in a DSP card and expect your DSP processes to magically be offloaded to it. The software has to be written for the DSP system. In most "native" hosts that means you insert VST or AU plugins which route the data into the DSP system and back into the CPU (which adds latency), or you have an audio interface that does the DSP processing before the audio goes into the computer (e.g. a UAD Apollo). You can use an Apollo with Reason this way, no problem, but it will only allow you to use the UAD plugins with the DSP processors (as said).
QVprod wrote:Part of me thought that might have been a bit misleading to post. Good catch to clarify. In addition to the Nyquist limit, sampling rate works similarly to frame rate on video: the more frames there are, the better (and with more detail) a motion is captured, theoretically. Same with audio: digital recording doesn't record a perfect sine wave but a collection of samples that make up that sine wave, just like video is a collection of pictures capturing motion. That said, a standard is set where the human ear can't perceive any difference.

The two are obviously directly related though, two sides to the same coin perhaps.
Actually, video frame rate and sample rate are not related at all, and this analogy is basically what causes the confusion people have about digital audio. For audio you *DO* have all the values in between samples; the ONLY thing that the sample rate defines is what the highest captured frequency is.

And to clear up confusion about dithering: dithering is only relevant for the BIT DEPTH of the audio; it has nothing to do with the sample rate. Furthermore, the bit depth of the audio ONLY defines the signal-to-noise ratio. No stair steps, no "accuracy"; these two only define the highest frequency and the signal-to-noise ratio. Otherwise digital audio yields EXACTLY the same thing as analog audio: perfect, beautiful waves.
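For the bit-depth side, the standard back-of-envelope rule (a sketch of the ideal case, not any particular converter) is about 6 dB of signal-to-noise per bit:

Code: Select all

def quantization_snr_db(bits):
    """Approximate SNR of an ideal quantizer: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print("%d-bit: ~%.0f dB signal-to-noise" % (bits, quantization_snr_db(bits)))
# 16-bit: ~98 dB, 24-bit: ~146 dB -- a lower noise floor, nothing else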

Lemme pull out that great Xiph video on the topic :s0403:


User avatar
Concep
Posts: 105
Joined: 17 Jan 2015

13 Feb 2015

If you are maxing out your 4770 CPU, then you are probably using a few very CPU-heavy Rack Extensions. Certain REs can really chew up CPU power. What are the main REs that you use? We can tell you if there are any you should use sparingly. Can you tell me if you are using the stock Intel heat sink and fan on your CPU? If you are, you might be overheating, which will drag down your CPU performance. An aftermarket heat sink is recommended for the 4770.

I had the 4770K, but moved up to the 4790K just recently, and it gave me a little more CPU headroom that made a huge difference for my workflow. All I had to do was update my BIOS and swap the CPU out. Very easy. It also made exporting songs a lot faster. The upgrade cost me about $75, since I was able to sell the 4770K on eBay. I'm very happy I did this, but I don't know if it makes sense for everyone, as the 4770K is a very good CPU.

User avatar
pjeudy
Posts: 1559
Joined: 17 Jan 2015

13 Feb 2015

normen wrote:Instance-wise, a modern CPU can handle a lot more DSP processing than a "DSP card", unless you buy a whole lot of Pro Tools DSP racks. The main advantage of DSP processors is that they can do the processing with a lot less latency than a CPU, because a CPU isn't really made for low-latency processing, rather for high overall throughput. Additionally, you can't just put in a DSP card and expect your DSP processes to magically be offloaded to it. The software has to be written for the DSP system. In most "native" hosts that means you insert VST or AU plugins which route the data into the DSP system and back into the CPU (which adds latency), or you have an audio interface that does the DSP processing before the audio goes into the computer (e.g. a UAD Apollo). You can use an Apollo with Reason this way, no problem, but it will only allow you to use the UAD plugins with the DSP processors (as said).
Nice and clear, thanks normen!

My opinion is that Propellerhead REASON needs a complete rewrite!
P.S: people should stop saying "No it won't happen" when referring to a complete rewrite of REASON. I have 3 letters for ya....VST

User avatar
selig
RE Developer
Posts: 11746
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

13 Feb 2015

QVprod wrote: Part of me thought that might have been a bit misleading to post. Good catch to clarify. In addition to the Nyquist limit, sampling rate works similarly to frame rate on video: the more frames there are, the better (and with more detail) a motion is captured, theoretically. Same with audio: digital recording doesn't record a perfect sine wave but a collection of samples that make up that sine wave, just like video is a collection of pictures capturing motion. That said, a standard is set where the human ear can't perceive any difference.
To add to what Normen already said: no, it is not the same, because a digital signal can be reconstructed into a 100% analog signal that you can look at on a scope just like any other analog signal. But a film is not intended to EVER be converted back into analog (as if that were even possible), unless you call what our brain does to connect the images a form of conversion. ;)

The confusion again comes from thinking the digital data is the final product - it's not; the final product is an analog waveform! :)
Selig Audio, LLC

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

13 Feb 2015

QVprod wrote:...digital recording doesn't record a perfect sine wave but a collection of samples that make up that sine wave...
This might help people too. It's actually the opposite of what you say here. A sine wave is the waveform that takes the fewest samples to recreate. It doesn't have to be traced at all. It's when you want the higher harmonics (higher frequencies) of a complex sound that you need more samples. If you only have the samples that hit the peaks and valleys (or any two parts of the wave above and below zero), you'll still get a perfectly reconstructed sine. If you want a square wave, on the other hand, you need far more samples to fill in each of the harmonics, which themselves are actually sine waves, just higher in frequency and lower in amplitude. So you need to store the harmonics up to the limit of human hearing.
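A band-limited square wave makes this concrete. Here's a short numpy sketch (the fundamental is chosen arbitrarily) that builds one out of exactly the odd sine harmonics that fit below the Nyquist limit:

Code: Select all

import numpy as np

fs = 44100
f0 = 500                                  # arbitrary fundamental
t = np.arange(fs) / fs
square = np.zeros_like(t)

k = 1
while k * f0 < fs / 2:                    # only harmonics below the Nyquist limit
    square += np.sin(2 * np.pi * k * f0 * t) / k
    k += 2                                # square waves use odd harmonics only

square *= 4 / np.pi                       # Fourier-series scaling
print("odd harmonics stored:", (k - 1) // 2)  # each is just a quieter, higher sine

The sine needs only its own one component; the square needs all 22 of them.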

An aside: the wavetable synth I'm working on uses different waveforms for parts of the keyboard range. The upper octave for every table is always the sine wave, because the harmonics above that would be lost due to the limited frequency response.

User avatar
QVprod
Moderator
Posts: 3496
Joined: 15 Jan 2015
Contact:

13 Feb 2015

normen wrote: Actually, video frame rate and sample rate are not related at all, and this analogy is basically what causes the confusion people have about digital audio. For audio you *DO* have all the values in between samples; the ONLY thing that the sample rate defines is what the highest captured frequency is.
QVprod wrote: Part of me thought that might have been a bit misleading to post. Good catch to clarify. In addition to the Nyquist limit, sampling rate works similarly to frame rate on video: the more frames there are, the better (and with more detail) a motion is captured, theoretically. Same with audio: digital recording doesn't record a perfect sine wave but a collection of samples that make up that sine wave, just like video is a collection of pictures capturing motion. That said, a standard is set where the human ear can't perceive any difference.
selig wrote:
To add to what Normen already said: no, it is not the same, because a digital signal can be reconstructed into a 100% analog signal that you can look at on a scope just like any other analog signal. But a film is not intended to EVER be converted back into analog (as if that were even possible), unless you call what our brain does to connect the images a form of conversion. ;)

The confusion again comes from thinking the digital data is the final product - it's not; the final product is an analog waveform! :)
OK, no problem admitting when I'm wrong. Believe it or not, I actually learned that from a college professor in a recording class (I think; or maybe it was Physics of Music and Sound?) back when I was in school. But for further clarity, the two things I was saying are directly related were the number of samples and Nyquist. The video reference was just an analogy.

What's funny, is that the fact that I'm wrong about this supports my original post.

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

14 Feb 2015

QVprod wrote:Believe it or not, I actually learned that from a college professor...
Oh, I believe it. How digital audio works isn't intuitive. I mean, we did dot-to-dots as kids, and digital samples are 2D: sampling rate and bit depth. It just seems like you should connect the dots. And like the more difficult dot-to-dots, the more points there are, the more detail you should get. But unless you were trying to plot sine waves between the dots, it's not the same thing.

Heck, I was guilty of spreading the wrong information at one point, because I was just repeating what people had told me. That's one reason I go to lengths to help people understand how it really works: to make up for my past sins, and to make sure the false information isn't spread any further.

Higor
Posts: 121
Joined: 19 Jan 2015

15 Feb 2015

Last Alternative wrote:Everything is up to date, yet somehow my songs are all ramping up and down between 3 and 5 bars! Problem is there are only 6 bars... I don't have anything else running or open, so I don't get it.
When no programs are open, what are the numbers in your task manager - CPU, Memory, Disk?
 
My processor - an i7 3630QM - is a little less powerful than yours; I have 8 GB of RAM, and my numbers - with no programs open - are around:
 
CPU: 1%
Memory: 16%
Disk: 1%
 
If your numbers are higher than that, perhaps you need to optimize Windows. Search for some YouTube videos like "Windows fast and clean", something like that. I always turn off automatic updates for every program. Every change I make in the OS I monitor, in order to keep the numbers as low as possible.

Bigsby
Posts: 61
Joined: 21 Jan 2015

16 Feb 2015

normen wrote:
Actually, video frame rate and sample rate are not related at all, and this analogy is basically what causes the confusion people have about digital audio. For audio you *DO* have all the values in between samples; the ONLY thing that the sample rate defines is what the highest captured frequency is.
Just trying to wrap my head around this.

Since digital audio is just a series of samples converted to an analog audio wave, it would seem to logically follow that the resultant wave is recreated via interpolation (i.e. smooth curves between the samples, not straight lines or stair steps like in the video above).  It would further follow that distortion of the original analog audio wave, no matter how minute, would result from that interpolation.

I keep reading and hearing that digital audio at a sample rate above 44.1 kHz is indistinguishable to human hearing from the same audio at 44.1.  Does this mean that interpolation distortion of the analog wave only occurs above 22 kHz in the audio frequency spectrum at a 44.1 kHz sample rate?  In other words, any additional wiggling of the wave between the samples is happening so fast it's too high to hear?

In the latest TapeOp, Bob Ludwig, the famous mastering engineer, states that you can hear the difference between higher resolution audio and CD when you relax and listen to the high resolution audio for a longer period of time, then switch over to the same audio at CD quality.  Presumably you'd have to be listening to it on nice equipment.  He states that quickly switching back and forth makes it difficult or impossible for the brain to tell the difference.

I'd appreciate any clarification and thoughts about all of this.  Presumably, Bob Ludwig knows what he's talking about.

User avatar
normen
Posts: 3431
Joined: 16 Jan 2015

16 Feb 2015

Bigsby wrote:Just trying to wrap my head around this.

Since digital audio is just a series of samples converted to an analog audio wave, it would seem to logically follow that the resultant wave is recreated via interpolation (i.e. smooth curves between the samples, not straight lines or stair steps like in the video above).  It would further follow that distortion of the original analog audio wave, no matter how minute, would result from that interpolation.

I keep reading and hearing that digital audio at a sample rate above 44.1 kHz is indistinguishable to human hearing from the same audio at 44.1.  Does this mean that interpolation distortion of the analog wave only occurs above 22 kHz in the audio frequency spectrum at a 44.1 kHz sample rate?  In other words, any additional wiggling of the wave between the samples is happening so fast it's too high to hear?

In the latest TapeOp, Bob Ludwig, the famous mastering engineer, states that you can hear the difference between higher resolution audio and CD when you relax and listen to the high resolution audio for a longer period of time, then switch over to the same audio at CD quality.  Presumably you'd have to be listening to it on nice equipment.  He states that quickly switching back and forth makes it difficult or impossible for the brain to tell the difference.

I'd appreciate any clarification and thoughts about all of this.  Presumably, Bob Ludwig knows what he's talking about.
First of all, click play on that video I posted, it explains everything you ask in detail.

As for Mr. Ludwig's credibility: higher frequencies can "fall down" into the audible range, especially when played back via speakers (which is pretty much always the case with audio recordings ;) ). So there's a possibility that having some higher frequencies (higher than 22.05 kHz, that is) can make a record sound different. Most recordings from the "golden times" of audio don't have much content in these frequency ranges at all, though.

Furthermore, most playback systems (D/A converters etc.) will probably sound different at different sample rates, which is what people who "test" this at home might be recognizing. They associate "different" with "better" when one of the options has a higher number (frequency), and hear "better" instead of just "different".

In any case it's NOT a quality difference or "higher resolution" (again, look at that video); the concept of "resolution" is completely misleading in this regard. There are no "pixels". And these "minute changes" in between the samples you talk about are simply frequencies above Nyquist, as all audio can be described as just a combination of sine waves, and as was said, they don't get recorded. That's the same effect a microphone with a frequency range up to 20 kHz has, though :)

Edit: And about your question concerning "interpolation": it's actually just a filter (as in low-pass filter). Since any "changes" between the samples would be above Nyquist, a simple low-pass filter at Nyquist will yield the intended smooth wave.
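A quick numpy/scipy sketch of that last point (the values are illustrative): band-limited interpolation of a sine lands exactly on the wave you would have generated directly, because the filter *is* the reconstruction:

Code: Select all

import numpy as np
from scipy.signal import resample

fs = 22050
f = 5000
x = np.sin(2 * np.pi * f * np.arange(fs) / fs)

# 4x "interpolation" is just low-pass filtering at the original Nyquist.
smooth = resample(x, 4 * fs)
direct = np.sin(2 * np.pi * f * np.arange(4 * fs) / (4 * fs))
print("max difference from a directly generated wave: %.1e"
      % np.max(np.abs(smooth - direct)))   # effectively zero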

User avatar
ScuzzyEye
Moderator
Posts: 1402
Joined: 15 Jan 2015
Contact:

17 Feb 2015

I'd really like to see people who make claims about being able to tell the difference between two audio samples conduct a proper double-blind test. It's so easy for the mind to tell you you're hearing something when you're not. There are tons of stories about mix engineers (people who have well-trained ears) making subtle tweaks to an EQ to get just the right sound, and then realizing the EQ was not in the signal path.

Though I did come across someone who could pass an ABX test with down-sampled audio over and over. It wasn't under extended listening; it was actually very short listening. His technique was to listen for one little part of the audio that sounded different. It'd either be there or not on the down-sample. The thing is, the original files were known to have quite a bit of ultrasonic content. If I had to guess, he was likely picking up on an artifact created by the low-pass filter that was used to prepare the lower-sample-rate audio.

It's impossible to make a perfect filter. If you have a lot of content at and just above Nyquist, and it isn't all removed before down-sampling, there will be aliasing artifacts pushed into the audible range.
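Here's a tiny numpy sketch of that fold-down (made-up frequencies, but the arithmetic is the point): a 30 kHz tone decimated from 96 kHz to 48 kHz with no filter comes back as an 18 kHz alias:

Code: Select all

import numpy as np

fs = 96000
f = 30000                                # ultrasonic content
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f * t)

naive = x[::2]                           # decimate to 48 kHz with NO low-pass filter
spectrum = np.abs(np.fft.rfft(naive))
freqs = np.fft.rfftfreq(len(naive), d=2.0 / fs)

print("strongest component after naive decimation: %.0f Hz"
      % freqs[np.argmax(spectrum)])
# ~18000 Hz: 48000 - 30000, folded squarely into the audible range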

electrofux
Posts: 864
Joined: 21 Jan 2015

18 Feb 2015

In that track, what REs exactly are you using? Have you checked whether there is one hog, e.g. a complex Combinator patch? There are certain REs, or even patches, that will blow up any CPU.

sleeper0013
Posts: 18
Joined: 20 Feb 2015

20 Feb 2015

Creative Sound Blaster ZxR 3D PCIe sound processor. This gave me a 30% DSP increase on a machine with an AMD FX-9590 8-core 4.7 GHz processor. Mind you, this is an internal card meant for a tower and not a laptop.
