Orgelklang - technisch und philosophisch
DigitalPipes (deleted)
#16 RE:Orgelklang - technisch und philosophisch
Oh! Something to talk about.
A difference in sound quality is more easily perceived with a higher bit depth than with a higher sample rate. I remember well the double-blind tests we carried out at university during the audio engineering semester: a "regular" listener can perceive the difference between 16 and 24 bits far more readily than the difference between 44.1 and 48 kHz. That is because, as you correctly stated, the finer amplitude resolution describing the sound wave allows a converter to reproduce the sound more faithfully, with a closer approximation. A sampling rate above 48 kHz, on the other hand, captures whatever lies above 24,000 Hz (nothing of the organ, and nothing of the environment in which it sounds) and adds an amount of data that is good, sorry to say, for nothing. It is true that the higher frequencies influence the lower ones, but often in a bad way, through intermodulation distortion. Not to mention the type of DAC that would be needed in this case: it must be perfectly linear, otherwise the distortion created would be unbelievably nasty. I have a number of hi-res devices at home, and when I run test files to check their linearity, only RME can more or less stand the test. For all the others, it's a catastrophe. So, we would need a hi-res sample, super expensive linear DACs, and then? Oh, we need beryllium tweeters to listen to what is not there. And I can picture organ manufacturers mounting super DSPs on their organs to handle the data, super DACs for the conversion, and plasma tweeters. Ouch.
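To put rough numbers on the two parameters, here is a small back-of-the-envelope Python sketch, purely illustrative and based on textbook values only: the theoretical SNR of an ideal quantizer (about 6.02 dB per bit plus 1.76 dB for a full-scale sine) and the Nyquist limit for the usual sample rates.

```python
# Back-of-the-envelope: what bit depth and sample rate actually buy you.
# Theoretical SNR of an ideal quantizer for a full-scale sine wave:
#   SNR_dB ~= 6.02 * bits + 1.76
def quantization_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

# Nyquist limit: the highest frequency a given sample rate can represent.
def nyquist_hz(sample_rate_hz: float) -> float:
    return sample_rate_hz / 2.0

for bits in (16, 24):
    print(f"{bits}-bit: ~{quantization_snr_db(bits):.0f} dB theoretical SNR")

for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate} Hz sampling captures content up to {nyquist_hz(rate) / 1000:.1f} kHz")
```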
When I say that sampling is "fertig" I mean that, as far as achieving a credible organ sound is concerned, this technology has hit the ceiling. The best "sampled" device on the market is Hauptwerk, because the user can decide on the processing power, the quality of the converters, and the quality of the speakers. That makes a huge difference compared to any ready-to-use solution one can buy in a shop. And still, even with the best equipment, the sound image produced looks like a jigsaw puzzle. No matter how far away you look at (listen to) it, the cuts are always visible. So, instead of reconstructing a sound image piece by piece, why not construct it from scratch? Isn't that what AI is trying to do now? Think of deepfakes: they are a thousand times more credible than a puzzle. Physical modeling is the answer. Well, the evolution of physical modeling.
Now, Viscount. I love their technology, I hate their approach. Physical modeling (although it is not real physical modeling) and mono reverberation. Great, standing ovation! The problem with the mixtures is not just the headroom, it is the algorithm: most of them sound like crap, only a few sound very good. So a way to make them sound good already exists. Yes, let's wait and see what they come up with. I am not holding my breath, though.
For those interested in knowing more about high-resolution audio, here is the best video out there:
Samplerates: the higher the better, right?
Michal (deleted)
#18 RE:Orgelklang - technisch und philosophisch
I, well my company, once worked for Adobe (and also the Fraunhofer Institute) and what I said in my post was what I remembered.
Now, after your reply, I searched for an exact quotation. Here it is (source: Adobe):
"Sample rates were first discussed in the 1940s, as part of the Nyquist–Shannon theorem. This states that any sampling rate must have twice the frequency of the original recording, otherwise the sound is not faithfully reproduced.
The human ear can hear between 20 hertz (20Hz) and 20 kilohertz (20kHz). 44.1kHz is more than twice the top range of human hearing, so will provide a very accurate reproduction according to the theory.
Some people still record in higher sample rates to capture all sounds. While we can’t hear these in the original recordings, if an audio sample recorded at 192kHz was pitched down, some hitherto inaudible frequencies would become audible. If it was recorded at a lower sample rate and then pitched down, some of the highs in that audio would be lost.
However, even if you’re recording at a higher rate, it will be likely converted back to 44.1kHz – the rate to which many modern audio systems are set.
What other sample rates are used – and what for?
With 44.1kHz the standard for CD audio, you might wonder why other sample rates exist. As we’ve mentioned, higher sampling rates can provide clearer audio with no white noise. They can also be useful for mastering and mixing audio. Even if we can’t hear some sounds, they do exist in the higher sampling rate recording and so can still be manipulated.
48kHz.
This sample rate is also used as a standard rate alongside 44.1kHz. Do check though, as audio recorded in one rate and played at another will be either sped up or slowed down.
88.2kHz.
This is now the gold standard for hi-res recordings. Using this sample rate produces less distortion (called ‘aliasing’) when converting from analogue to digital, and allows greater freedom when mixing and mastering.
96kHz.
Similar to 88.2kHz, this sample rate provides more options when mixing and mastering the audio. But working at these higher rates could be an issue if your computer can’t handle the added information and storage needed.
192kHz.
Some reports have suggested that recording at such a high sample rate can produce issues in your audio, such as jittering. It’s also hard to find computers that can handle it. Really, it’s only useful for slowing down high-frequency audio.
And by the way: my ears were never very good at high frequencies. But I can tell plasma speakers from conventional speakers! A friend of mine owned Acapella loudspeakers with the Acapella ION plasma tweeters. I can very well remember the ozone odor and the silky, precise high tones that even helped to distinguish much lower piano tones from different piano manufacturers, even though neither the pianos nor my ears come anywhere near these frequencies. Because of the ozone odor and the frighteningly high voltage involved, I went with the IRS Betas instead of the Acapella speakers, but even though the IRS were great in the bass and many other respects, the sound was never as precise as with the Acapellas.
Hey Paolo, please don't get me wrong - I mostly agree with you, including that modeling is the future, but I just like to discuss - if only because always agreeing is boring :-)
Before I bombard you with more numbers, data, or mathematical models, here is a little story:
The late Malcolm Forbes: "The difference between men and boys is the price of their toys."
In the OrgelQuiz (not only) for confirmands there is a picture of an optical illusion, along with the question of whether our ears can be fooled just as easily.
Our brain is definitely predestined to fool itself, and to be fooled, everywhere and all the time...
Have a look!
I'm curious and am already looking forward to your answers - OrgelQuiz online -
. . . Oliver . . .
Hello Oliver,
thank you very much for the nice "twist" you gave the thread!
And yes, our ears can be fooled. Our hearing, too, is part of the synaesthetic complex.
And anyone who has ever done listening tests with people who are not into organ technology can gain astonishing insights about authenticity.
Moreover, our hearing depends on our mental state, and vice versa.
Best regards,
Wolfgang
DigitalPipes (deleted)
#21 RE:Orgelklang - technisch und philosophisch
Yes @Oliva di Gloria and yes @Brabanter, that's true, we are in the domain of bias and psychoacoustics here. However, there are clear reasons why our brain processes things in a way that might look objectively misleading; I will come back to this later in the post.
So, coming back to sampling and the interesting debate with @Michal (and the discussion is now between two audiophiles, a type of human being that is far worse than the organist).
Above 24 kHz there is no information relevant to organ music that needs to be sampled. However, let's suppose that we are recording an organ concert and a secret terrorist is shooting a disturbing ultrasonic sound into the middle of the audience for their own evil enjoyment. We need proof that such a disturbance happened. So we decide to sample at 192 kHz (hoping that the frequency fired was not higher than 96 kHz, otherwise we need to sample at 384, 768, and so on, and we need a truck to host the computer). We manage to catch the nasty sound, and we should seriously consider working in forensics! Now we want to show in court that there was a terrorist at the concert, so as to allow the judge to convict them of a Sacred Criminal Offence. We bring our super-speakers (plasma tweeters, of course, to be sure we can cover the frequency range at least up to 150 kHz) into the room and play the evidence. Dammit! Nobody can hear it! Least of all the judge, who is in their 70s. No way, all this work for nothing, the terrorist walks free.
Now, the aliasing issue. If there is a frequency higher than the threshold set by the Nyquist theorem for the chosen sample rate - let's say we are sampling at 48 kHz, so we correctly capture frequencies up to 24 kHz, and there is something somewhere around 28 kHz - then the samples no longer track each period of that wave, and it will be interpreted and wrongly encoded as a much lower frequency. But this is a very old problem, and ADCs are equipped with very effective anti-aliasing filters that exclude whatever is found in that region.
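Just to make the fold-back concrete, here is a tiny illustrative Python sketch (my own toy calculation, not from any converter datasheet) of where an out-of-band tone ends up when it is sampled without an anti-aliasing filter:

```python
# Where does a tone land after sampling at rate fs without an anti-aliasing filter?
# A frequency f folds back to |f - round(f / fs) * fs|; tones already below the
# Nyquist limit (fs / 2) are left unchanged.
def alias_frequency_hz(f_hz: float, fs_hz: float) -> float:
    return abs(f_hz - round(f_hz / fs_hz) * fs_hz)

fs = 48_000
for f in (20_000, 28_000, 40_000, 70_000):
    print(f"{f / 1000:.0f} kHz tone sampled at {fs / 1000:.0f} kHz appears at "
          f"{alias_frequency_hz(f, fs) / 1000:.0f} kHz")
```

So the hypothetical 28 kHz tone above would reappear as a spurious 20 kHz component.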
And finally, the comparison of audible frequencies between different speakers. Here, the test of whether there is really a perceivable difference between normal and hi-res material should be carried out with linear speakers. Otherwise we fall into the loudness trick caused by the different frequency responses of the speakers. Our ear is more sensitive to some frequencies and less to others, i.e. we perceive different loudness at different frequencies even when the amplitude is constant across the whole range:
[image removed (no rights)]
So, when we compare speakers with different frequency responses, the speaker that hits the region where our ear is more sensitive, or that provides a higher amplitude in a region where we perceive, for example, "airiness", will give us the feeling that it is reproducing something "better" or in a more "transparent" way than another. Or that it reproduces sonic information (ultrasonic) that another speaker cannot. The truth is that, provided both speakers cover the entire audible frequency range, they are reproducing the same sonic information with different amplitudes; all the audible information is there in both cases. The frequency response curve of a speaker is therefore more relevant here than its ability to reproduce ultrasonic frequencies, because we simply cannot hear them. The supposedly perceived ultrasonic component is a confirmation bias arising from the knowledge of listening to such a speaker. It is a physiological fact that, in a double-blind test performed with linear speakers at equal gain, nobody can hear any difference between audio material with or without an ultrasonic component.
It is like a TV or projector that can reproduce the infrared portion of the light spectrum. How nice! Too bad we simply cannot see it. So why bother? It's like enjoying the thought of a faithful wife while she is in bed with the neighbor. Happiness is a state of mind.
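By the way, for a concrete measure of "our ear is more sensitive to some frequencies than others" mentioned above, the usual approximation is the A-weighting curve. A small Python sketch (the formula is the common IEC 61672 approximation; take the numbers as indicative, not gospel):

```python
import math

# Standard A-weighting (IEC 61672 approximation): roughly how much quieter the ear
# perceives a tone of a given frequency compared to the same amplitude at 1 kHz.
def a_weighting_db(f: float) -> float:
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

for f in (50, 100, 1000, 3000, 10_000, 16_000):
    print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB relative sensitivity")
```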
#22 RE:Orgelklang - technisch und philosophisch
The 96 kHz vs. 192 kHz debate has always been very controversial.
I think the question must be asked in a different way, and then the answer changes as well.
RECORDING at 192 kHz and LISTENING at 192 kHz are two very different things.
For technical reasons it may therefore make sense to record at 192 kHz (depending on the sampling frequency at which the recording will be released), because the audio material can be subjected to post-production processes (as Oliver has rightly pointed out, for example when for various reasons it is necessary to halve the speed; but there are many other such processes in post-production and mastering).
For LISTENING at 192 kHz, however, the doubts remain.
Personally (my system is a good hi-fi system, quite revealing, even if not esoteric: Opera loudspeakers, Nad electronics), I can hear differences between 96 and 192 kHz, but I must also say that 192 doesn't sound better, just different, and this could simply be the effect of a different converter filter handling the two rates differently.
Now, this is all about home playback of commercial stereo recordings.
But a digital organ (or Hauptwerk) is not a normal reproduction system, and the matter becomes even more complicated.
Hauptwerk, for example, is a system that reproduces samples but also partially uses physical modeling, and it can use external VSTs or, in any case, some processing of the samples. It can therefore be considered a program that applies a sort of automatic post-production.
So does it make sense to use high-sample-rate samples (not necessarily 192 kHz)?
Personally I don't know, but it is certain that many VSTs and post-production systems work better with high-sample-rate material, and for reasons other than the fact that they are able to reproduce tones at 48 kHz (which our ear does not perceive).
In summary, I think sampling (in general) has reached excellent maturity, but I wouldn't be so sure there is nothing left to discover (or improve).
On the other hand, I think that sampling-based musical instruments (that is, digital organs and Hauptwerk), precisely because they are not "simple" audio reproduction systems (even if very good at reproducing a pipe tone), still need research and development in order to imitate a pipe organ perfectly.
And after all, even Physis still can't do it.
So, again good work to the engineers of Viscount, Johannus and Hauptwerk!
DigitalPipes (deleted)
#23 RE:Orgelklang - technisch und philosophisch
Quote from ahlborn in post #25
... but it is certain that many VSTs and post production systems work better at high frequencies, and for reasons other than the fact that they are able to reproduce tones at 48khz (which our ear does not perceive).
Yes, and they use oversampling, not higher sample rates, to control the intermodulation distortion.
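For the curious, a rough Python/NumPy sketch of what that oversampling trick looks like in practice (my own toy example, assuming numpy and scipy are installed; the tanh stage is just a stand-in for any non-linear plugin):

```python
import numpy as np
from scipy.signal import resample_poly

# Why plugins oversample internally: a non-linear stage (saturation, here a simple
# tanh) creates harmonics above Nyquist. At 4x the internal rate those harmonics
# stay below the (higher) Nyquist limit, and the decimation filter in resample_poly
# removes them before returning to the original rate.
fs = 48_000
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 15_000 * t)         # a loud 15 kHz tone

naive = np.tanh(4 * x)                            # distort at 48 kHz: harmonics alias

x_os = resample_poly(x, up=4, down=1)             # upsample to 192 kHz
y_os = np.tanh(4 * x_os)                          # distort at the higher internal rate
oversampled = resample_poly(y_os, up=1, down=4)   # filter + decimate back to 48 kHz

# The difference between the two results is dominated by the aliased distortion products.
print("RMS difference:", np.sqrt(np.mean((naive - oversampled) ** 2)))
```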
Quote from ahlborn in post #25
On the other hand, I think that sampling-based musical instruments (therefore digital organs and Hauptwerk), precisely because they are not "simple" audio reproduction systems, (even if very good at reproducing a pipe tone), still need research and development, to perfectly imitate a pipe organ.
And after all, even Physis still can't do it.
There is an insurmountable flaw in sample-based instruments: they reproduce the whole sonic image piece by piece. One cannot change this, because of the very nature of sampling. Physical modeling, on the other hand, is far from being a mature technology and works from a completely different perspective.
As mentioned earlier, there are (at least) two aspects to processing frequencies above the hearing threshold.
The more important aspect is certainly the post-processing of the sound - both in modeling and in sampling.
But there is also an acoustic reason why frequencies above the hearing threshold change the overall musical impression. In fact, many musical instruments produce ultrasound. And this is important for authentic playback when recording the instruments. Source: Ultrasonic_components_of_musical_instruments
Abstract: "According to the Nyquist Theorem, the sampling rate must be at least twice the highest analog frequency component of the signal (not the highest perceived frequency). If the components of the original signal run over the Nyquist frequency it is necessary to limit them by means of an antialiasing filter, altering the phase components of the signal at least a decade around the cutoff frequency of the filter. If the ultrasonic energy of musical instruments was significant, the use of antialiasing filters with a cutoff frequency under the Nyquist frequency would result in some kind of misrepresented audible components of the original signal."
Thus, it can be said that higher frequency resolution by means of a high sampling rate is important not only for post-processing, but also for physical sound perception.
Mental and organic effects (that are sure to exist) are not yet taken into account (there are hundreds of publications on this matter - especially for military or official applications).
DigitalPipes (deleted)
#25 RE:Orgelklang - technisch und philosophisch
Quote from DigitalPipes in post #26
Quote from ahlborn in post #25
... but it is certain that many VSTs and post production systems work better at high frequencies, and for reasons other than the fact that they are able to reproduce tones at 48khz (which our ear does not perceive).
yes, and they use oversampling, not higher sample rates, to control the intermodulation distorsion.
No Paolo, I am referring to something different. Oversampling is commonly used by VSTs, but I was referring to some studies that were done (I have to look for the source though, not right now, I'm out), processing signals recorded at 96 and 192 kHz with VSTs.
I vaguely remember (I can be more precise later):
same signal, recorded with the same microphone, same preamp, two identical converters (one set to 96 and the other to 192).
The two signals were processed with a VST (I don't remember which).
In one case in 96 -> out 96, in the second case in 192 -> out 96.
Analysis of the two output files from the VST showed that the file recorded at 192 had less noise/less distortion after the VST processing.
Just to say there were differences.
But then again, give me time to look for the study in question, I can be more precise.
Quote from ahlborn in post #29
The two signals processed with VST (I don't remember which).
In one case in96 -> out 96, in the second case in192 -> out 96.
Analysis of the two output files from the VST showed that the file recorded at 192 had less noise/less distortion after processing the VST.
Just to say there were differences.
But then again, give me time to look for the study in question, I can be more precise.
There is some truth in that, but those partial improvements come at the cost of excessive processing resources, while better results can be achieved with lower sample rates plus oversampling.
Quote from Gemshorn in post #30
The layman is amazed reading this...
Have I understood correctly that inaudibly high frequencies can have a very audible effect on the sound as perceived by humans?
The answer:
https://people.xiph.org/~xiphmont/demo/neil-young.html
Michal (deleted)
#30 RE:Orgelklang - technisch und philosophisch
@Gemshorn and everyone who hasn't had bits for breakfast (they don't really taste good anyway):
Well, put simply, the point is that Paolo thinks sampling technology has reached the end of its possibilities.
I disagree, because I think the quality of the samples and of their post-processing has not yet been pushed to its technical limits.
When recording the sound snippets there are two important parameters: the bit depth (which determines the dynamics) and the sampling rate (which determines the frequency range and the resolution). On the bit depth there is obviously no disagreement. But on the sampling rate there is.
Since sampled instruments produce enormous amounts of data that must be instantly available (and therefore have to fit into the computer's RAM), and since today's home systems (computers as well as organs) can provide this only to a limited extent with the necessary low latency - and consequently not with arbitrarily high bit depth and sampling rate - there is, from a purely technical point of view, still plenty of headroom here.
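To make those amounts of data concrete, a small Python sketch with purely invented example figures (the number of ranks, average sample length, and so on are placeholders, not data from any real sample set):

```python
# Rough RAM estimate for an uncompressed sample set, to make the trade-off concrete.
# All figures (ranks, notes, average sample length, channels) are invented placeholders.
def sample_set_gib(ranks: int, notes_per_rank: int, seconds_per_note: float,
                   sample_rate: int, bit_depth: int, channels: int) -> float:
    bytes_total = (ranks * notes_per_rank * seconds_per_note
                   * sample_rate * (bit_depth // 8) * channels)
    return bytes_total / 2**30

for rate, bits in ((44_100, 16), (48_000, 24), (96_000, 24), (192_000, 24)):
    gib = sample_set_gib(ranks=40, notes_per_rank=61, seconds_per_note=10,
                         sample_rate=rate, bit_depth=bits, channels=2)
    print(f"{rate / 1000:g} kHz / {bits} bit: ~{gib:.1f} GiB")
```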
So in my opinion, sampling technology is far from finished.
Paolo counters that none of this is relevant any more, because higher sampling rates open up a frequency range that we can no longer hear at all.
I say (and there seems to be a certain consensus on this) that at least for post-processing (which necessarily takes place in Physis as well as in sampling organs, if only through voicing, reverb, any change of settings, and even of volume), a higher resolution is good and important, because the calculations are then more accurate and the computed results more precise (in digital computation - not entirely correct, but put simply for the sake of clarity - resolution is always lost and then restored only approximately).
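A deliberately exaggerated toy illustration of that loss of resolution in Python (no real organ re-quantizes to 16 bits after every processing step; this just makes the effect visible):

```python
import numpy as np

# The same chain of gain changes, once re-quantized to 16 bit after every step,
# once kept in floating point throughout. The re-quantized path accumulates error.
rng = np.random.default_rng(0)
signal = rng.uniform(-0.5, 0.5, 48_000)        # one second of arbitrary audio
gains = [0.3, 2.5, 0.7, 1.9, 0.4]              # an arbitrary processing chain

def to_16bit_and_back(x):
    return np.round(np.clip(x, -1, 1) * 32767) / 32767

quantized = signal.copy()
floating = signal.copy()
for g in gains:
    quantized = to_16bit_and_back(quantized * g)   # resolution lost at each step
    floating = floating * g                        # full precision kept

reference = signal * np.prod(gains)
err_q = np.max(np.abs(to_16bit_and_back(quantized) - reference))
err_f = np.max(np.abs(to_16bit_and_back(floating) - reference))
print(f"16-bit intermediate steps: max error {err_q:.2e}")
print(f"float intermediates:       max error {err_f:.2e}")
```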
What is disputed, on the other hand, is whether the preservation of frequencies above the hearing threshold (more precisely: "above the range of hearing"), made possible by the higher resolution - that is, tones one cannot hear (ultrasound, although that term is really used for frequencies far above our hearing range, while I am talking about frequencies up to perhaps twice our hearing range at most) - positively influences the overall sound impression and makes it more authentic to the original.
It is a fact that musical instruments also produce sound that can no longer be heard. If this is missing from the "canned" recording or from the modeled "artificial tone", the authenticity of the reproduction compared to the original suffers. That is because the energy of these no longer consciously (and "consciously" is an important word here) heard tones still influences the energy of the overall sound. It is precisely the highly complex attack transients and the high tones that serve to distinguish tonal qualities.
I maintain that Physis 1 is currently too limited, in computing capacity but also in its algorithms, to be able to reproduce these ranges (hence the mixtures, which - and on this we surely agree - need improvement).
Sampling technology, on the other hand, can simply be scaled "upwards" when more memory is made available (in Hauptwerk you can tailor these requirements to your own computer). Physis, by contrast, needs a new computing platform that is inherently more powerful.
But all of this is a very theoretical discussion, conducted not least for the sake of the discussion itself.
After all, every existing Physis system can be upgraded plug-and-play with additional DSPs, and even Paolo is firmly convinced that this alone already improves the sound.
I simply expect a bit more from Physis Plus (that is, if we are supposed to buy ourselves a new organ). I think that is legitimate.
And if we are talking about frequencies above the range of hearing that are still supposed to have an influence (I don't mean the higher sample rate for the sake of more accurate computation in post-processing, but frequencies that are actually to be reproduced), then they must also be reproducible. And that requires corresponding equipment on the speaker side. That is expensive.
So one can actually spare oneself the whole discussion, because simply hanging better speakers onto the organ (or onto Hauptwerk) brings results that are worlds better than any argument about the ultrasonic range. I said it a long time ago: the old (80s/90s) debate about the differences between CD players is pointless if you don't have the audio chain to reproduce them. In short: a better speaker gains you far more than a better CD player. The annoying thing is that the better speaker costs far, far more money than the better CD player.
I know some people here can't stand hearing this any more, but it is the truth.
But why, then, the whole discussion above?
Because I simply believe that, even though I myself consider modeling to be the way forward, sampling organs must not simply be dismissed in terms of sound. Unlike Paolo, I do not believe that sampling is finished. And quite honestly, I don't even believe that analog was ever really finished (it was just more expensive than digital, and that was the true end of analog).