guitfnky wrote (28 Nov 2021):
it’s not apples and oranges—they’re both DAWs. it’s perfectly reasonable to compare them.
Imagine, for a moment, that instead of spending 2 years developing a device, I spend 1 year developing a solid working version of it, and then produce weekly updates for another year.
It's still 2 years' worth of development, but in one case you have 1 release versus 53 from the other (the initial release plus 52 weekly updates).
Does that mean you're getting more value for your money? No, it does not.
And if you add up the features, it will seem like the second approach delivered more. But it didn't. The knob feature introduced in version 1.0.5 was replaced by a slider in version 1.0.7, so after 2 years the knob is no longer in the product, yet it still counts as a delivered feature in the tally.
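Here's a toy sketch of that double-counting in Python (the changelog is entirely made up for illustration, not RS's actual history):

```python
# Made-up changelog for the hypothetical device above.
changelog = [
    ("1.0.5", "add", "knob"),      # knob introduced
    ("1.0.7", "add", "slider"),    # slider introduced...
    ("1.0.7", "remove", "knob"),   # ...and it replaces the knob
]

# Naive tally: count every "add" entry as a delivered feature.
naive_count = sum(1 for _, action, _ in changelog if action == "add")

# What actually ships after 2 years: replay the log.
shipping = set()
for _, action, feature in changelog:
    if action == "add":
        shipping.add(feature)
    else:
        shipping.discard(feature)

print(naive_count)    # 2 "features delivered"
print(len(shipping))  # 1 feature actually in the product
```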
And what about improvements in behaviour?
In the first case you could just have "feature X allows you to drag with the mouse".
In the second case you get:
1. v 1.1.8 - Feature X allows dragging
2. v 1.2.7 - Feature X displays an outline while dragging (never mentioned in the first case, because that's just how it worked and didn't need calling out)
3. v 1.3.9 - Improved dragging to interpolate window movements and reduce the "stutter effect" (also never mentioned, because that's just how it worked in the final version)
That's just one example of why you can't evaluate productivity based on releases alone. In your mind, it's easy to think of each release as a single unit of work. But it is not like that.
Just look at the build version numbers of Reason. That's how many different versions of Reason have existed internally at RS. You just never saw those versions released, because all that iteration happened internally.
And what about R&D? Do you know how much R&D either company performs to improve factors that won't appear on feature lists, but will shape them? Suppose each feature RS delivered involved comparing different implementations with a focus group to evaluate which one was the most intuitive to work with. At the end of it, you still have one finished product, but it took the work of building anywhere between 2 and 10 implementations to deliver it.
Now, when it comes to evaluating development speed by features, again, you just cannot compare the two because, despite both being DAWs, they are COMPLETELY different products.
The "fluffy stuff" is still "stuff." It still takes time to produce. And that's why it has to be considered in the conversation of how productive they are, because that is what they're producing.
Think of it this way: if RS did nothing but develop REs (and other "fluffy stuff"), then by ignoring that work you're effectively saying they've done absolutely nothing. That doesn't make sense, right?
Assessing developer productivity is tricky enough among peers on the same team, let alone across completely different products with vastly different feature sets.
Note: I am not in any way defending or siding with RS.
What I'm saying is just a matter of fact. You might not understand why, but that's because you've not really sat down and thought very deeply about how to assess programmer productivity.
Maybe RS could utilize some newer/better workflows. But for all you know, they could be working at the most effective rate of production possible. The fact that they've not implemented important features or fixes is no indication of their actual rate of productivity. It just means they've neglected those features and fixes.
Take Friktion as an example. How long would it take to develop such a thing? What about Grain or Mimic? Well, first you'd have to develop all of the prior technologies they build on (pitch shifting, beat detection, etc.).
So, even if Reaper were better than Reason, or the shoe were on the other foot, it's still an apples and oranges comparison, for the simple fact that one does lots of "fluffy stuff" and the other doesn't, focusing on something else instead. And for that reason, there's no legitimate way to convert any metric of productivity between them.
People have tried it in the past:
1. Lines of code written
2. Months spent on development
3. Number of features delivered
4. Sum of feature effort estimations
5. Number of tickets resolved
And over the last 40-50 years or so, all of them have failed as measures of productivity.
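A quick illustration with invented numbers (two hypothetical teams, not real data): rank them by three of those metrics and you get conflicting answers.

```python
# Invented numbers for two hypothetical teams; not real data.
teams = {
    "A": {"loc": 120_000, "features": 15, "tickets": 300},
    "B": {"loc": 40_000, "features": 30, "tickets": 150},
}

for metric in ("loc", "features", "tickets"):
    winner = max(teams, key=lambda t: teams[t][metric])
    print(f"most 'productive' by {metric}: team {winner}")

# loc and tickets crown team A while features crowns team B,
# and none of these numbers say anything about quality or R&D.
```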
Truth is, we don't have, and might never have, a single metric for developer productivity.