Even uncompressed audio cuts out frequencies. With digital audio capture it is impossible to capture everything. There will always be a floor and a ceiling. In the case of FLAC it’s typically 20-24 kHz.
Audiophiles have moved on to “high res lossless” because regular lossless wasn’t good enough for them.
The “high res lossless” you’re referring to is still FLAC. FLAC has no downside: whatever PCM audio you want, it can represent perfectly, while using less storage.
FLAC doesn’t “limit” or “cut out” anything unless you or the software you’re using is reducing the bit depth or sample rate of the source PCM waveform.
That’s something you might want to do, since not using a higher sample rate than necessary reduces file size significantly. But FLAC itself doesn’t do or require that.
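A quick way to convince yourself of that is to round-trip some PCM through FLAC and compare the samples bit for bit. A minimal sketch, assuming the `soundfile` library (libsndfile with FLAC support) and a hypothetical input file `master.wav`:

```python
import numpy as np
import soundfile as sf

# Read the source PCM as raw 16-bit integers so the comparison is exact.
pcm, rate = sf.read("master.wav", dtype="int16")

# Encode to FLAC at the same bit depth and sample rate -- no resampling.
sf.write("master.flac", pcm, rate, subtype="PCM_16")

# Decode it back and verify every sample is identical.
decoded, decoded_rate = sf.read("master.flac", dtype="int16")
assert decoded_rate == rate
print("bit-identical:", np.array_equal(pcm, decoded))  # True
```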
On new formats, you might be thinking of MQA, which supposedly encodes the contents of a higher-sample-rate PCM waveform into a lower-sample-rate file, but it has been proven to be largely snake oil, and lossy as hell in terms of bit integrity.
And this is because audiophiles don’t understand why the audio master is 96 kHz, or more often 192 kHz. You can actually easily hear the difference between 48, 96 and 192 kHz signals, but not in the way people usually think, and not after the audio has been recorded, because the main difference is latency when recording and editing. Digital signal processing works in terms of samples, and a certain number of them have to be buffered to transform the signal between the time and frequency domains. The higher the sample rate, the less time a buffer of a fixed number of samples spans, and if there’s one thing humans are good at hearing (relatively speaking) it’s latency.
Digital instruments start being usable at 96 kHz and above, as the latency with 256 samples buffered gets short enough that there’s no distracting delay from key press to sound. 192 kHz gives you more headroom to add effects and such without the pipeline becoming too long. A higher sample rate also makes changing frequencies, like bringing the pitch down, simpler, as there’s more data to work with.
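The arithmetic behind that claim is just buffer length divided by sample rate. A quick sketch (the 256-sample buffer comes from the comment above; everything else is standard):

```python
# Latency contributed by one 256-sample processing buffer
# at common sample rates.
BUFFER_SAMPLES = 256

for rate_hz in (44_100, 48_000, 96_000, 192_000):
    latency_ms = BUFFER_SAMPLES / rate_hz * 1000
    print(f"{rate_hz:>7} Hz -> {latency_ms:.2f} ms per buffer")

# 44100 Hz -> 5.80 ms,  48000 Hz -> 5.33 ms,
# 96000 Hz -> 2.67 ms, 192000 Hz -> 1.33 ms
```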
But after the editing is done, there’s absolutely no reason not to cut the published recording down to 48 or 44.1 kHz. Human ears can’t hear the difference, and whatever equipment you’re using will probably refuse to play anything above 25 kHz anyway, as e.g. the speaker coils aren’t designed to pass higher-frequency signals. It’s not like visual information, where equipment still can’t match the dynamic range of the eye, and we’re only just reaching pixel densities where we can no longer see a difference.
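Cutting a 192 kHz master down to 48 kHz for release is a single integer-ratio resampling step. A minimal sketch, assuming `scipy` and a synthetic stand-in for the master:

```python
import numpy as np
from scipy.signal import resample_poly

SRC_RATE = 192_000   # editing/master rate
DST_RATE = 48_000    # release rate

# Hypothetical master: one second of a 1 kHz tone at 192 kHz.
t = np.arange(SRC_RATE) / SRC_RATE
master = np.sin(2 * np.pi * 1000 * t)

# 192000 / 48000 = 4, so decimate by 4; resample_poly applies the
# anti-aliasing low-pass filter for us.
released = resample_poly(master, up=1, down=SRC_RATE // DST_RATE)

print(len(master), "->", len(released))  # 192000 -> 48000
```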
No it’s not lol
If that’s happening, you need to fix your transcoder settings.
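A quick way to check whether a transcode silently resampled or reduced bit depth, sketched with the `soundfile` library again (file names are hypothetical):

```python
import soundfile as sf

# Compare the stream metadata of the source and the transcoded file.
src = sf.info("master.wav")
out = sf.info("master.flac")

if out.samplerate != src.samplerate or out.subtype != src.subtype:
    print("transcoder changed the stream:",
          f"{src.samplerate} Hz/{src.subtype} ->",
          f"{out.samplerate} Hz/{out.subtype}")
else:
    print("sample rate and bit depth preserved")
```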