Pono Update


Neil Young’s Pono has come a long way since we previewed it last summer. The key thing: It launched. Originally scheduled to roll out in October 2014, Pono finally debuted in early January 2015, shipping pre-ordered players to its Kickstarter supporters and opening its music store the same day. The milestone was heralded by Young at a press conference during this year’s Consumer Electronics Show. Curious listeners can now buy their own Toblerone-shaped Pono player for $399.

The Pono player will also reportedly add DSD playback via a forthcoming software update. At present, the player supports the FLAC, ALAC, WAV, and AIFF formats.


Old Ben's picture

This is an excellent article - thank you! However, it also identifies the problem - there are too many disparate components that need to be brought together (at a pretty significant price tag) to reap the benefits of HRA. I don't think many people will take the financial leap to HRA without hearing the benefit in some kind of an A/B test against what they are listening to now. The problem is how to go about doing that when you need so many different pieces of hardware and software working together. This seems like a job for the CEA to spearhead in cooperation with a store like Best Buy.

What I would envision is that the CEA would work with the software and hardware developers to cooperatively offer their products for use in HRA demonstration kiosks in the store. Consumers would be invited to bring in their current audio equipment (e.g., their iPod and headphones) and use them to compare against one or more HRA setups. To make the comparison a true A/B comparison, the consumer would listen to the same song(s) in their current format and in an HRA format. What would probably be needed is a limited selection of HRA music tracks at the kiosk and the ability to provide the consumer with a free download of the same song in their current format (e.g., through a download code). Of course, the store would include all of the different HRA components for easy purchase (bundles?) once the consumer is sufficiently wowed and ready to buy.

sfdoddsy's picture

Nice article, but it doesn't really address the question posed in the first paragraph.

Hi-res music has been available for many years via DVD-audio and SACD. They've been failures. As have been HR downloads.

The problem is simple. Everyone can see the difference between HR video and LR video. Very few can hear the difference between HR audio and LR audio.

I can't, and I spent thousands trying.

FL Guy's picture

Ludicrous pricing.

Music has value - definitely. Well-mastered, high-res music has more value, sure. But whoever is pricing albums at $50 - $60 (as some of the current distributors do) is simply hallucinating imho.

As I remember CDs didn't take off until prices widely dropped below $20. I'm not saying $20 is the right price point for hi-res today. But pricing clearly plays a role in adoption (or lack of adoption).

dommyluc's picture


And I agree with sfdoddsy. Some of the sonic difference is so slight that many people are just not impressed. When I watch the Blu-ray of "Prometheus" and compare it to the DVD version, or even the HD version on my satellite receiver, the difference is impressive. But many SACDs and hi-res downloads just don't have the "WOW" factor like when you compare HD video to SD video.
And I still say that a well-recorded and well-mastered CD played using Dolby Pro-logic IIx Music mode through a decent surround system can sound extraordinary. Right now, I am bathing in the sonic luxury of the 2009 digital stereo remasters of the Beatles CD boxed set (sorry, traditionalists, but I already had the clickety-poppity vinyl versions when they first came out when I was a lad during the '60s, and they can take the mono versions and send them to jail with Phil Spector), ripped to the PC in WAV and streamed to my AV receiver. It's the best thing since sex, weed, and chocolate combined. As Bender of "Futurama" would say, "Fun on a bun!"

kvothe's picture

This is a well written article full of useful information, but it's also constantly on the verge of lying by omission.

I wish the article mentioned the cold, hard truth: Hi-Res audio like 192/24 in and of itself provides no benefit for playback over CD quality. And it can be inferior! Ouch. More on that below.

There are many reasons that many CDs don't sound very good, but the format has nothing to do with it. The article dances around that fact a bit by saying things like "when listening to compressed streams or badly mastered CDs". Yeah, yeah, you covered yourself with that careful language. But come on. The article gives the distinct impression that HRA will improve your listening experience because of the fidelity of the format, which is untrue. Leaving it unsaid leaves a taste like snake oil.

High resolution audio during capture and mastering is necessary, however. An analogy might be found in the advantages of shooting and editing photographs in RAW, even though you'll post them as lossy JPGs to friends and family.

Hold on for a deep dive and read this excellent article:

Brief highlights:
- "Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space."
- "There are a few real problems with the audio quality and 'experience' of digitally distributed music today. 24/192 solves none of them. While everyone fixates on 24/192 as a magic bullet, we're not going to see any actual improvement."
- Particularly see the whole "Listening Tests" section that discusses how it's been shown time and again that people are unable to distinguish higher res audio from that of CD quality, including industry professionals.
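The space figure in the first quote is easy to sanity-check with raw PCM arithmetic. A rough sketch (assumes uncompressed stereo; lossless compression like FLAC will shrink both figures somewhat):

```python
# Raw (uncompressed) PCM data rates for stereo audio:
# bits per sample * sample rate * channels.
bits_hires = 24 * 192_000 * 2   # 24-bit / 192 kHz stereo
bits_cd    = 16 * 44_100  * 2   # 16-bit / 44.1 kHz stereo

ratio = bits_hires / bits_cd
print(round(ratio, 2))          # → 6.53
```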

Yes, there are reasons to grab an HRA audio file. IMHO the main reason is if it's the only digital means to gain access to a better quality (or merely different) master that's not available on a CD release (e.g. the master used for a vinyl release may have had better dynamic range than what resulted on a CD release for the same album). But you'd go this route to gain access to a better master, not a better digital playback format.

Let's talk about points like that, rather than dance around them! It moves the industry forward, and creates more informed consumers.

This is a fun hobby, and eking out every last bit of quality and fidelity is an enjoyable pursuit - even when it won't affect the quality of the audio (attractive components, knobs with a good feel, etc.) So in that regard, knock yourself out with HRA. Have fun. But most people would be far better off improving other aspects of their listening experience first, and it feels like a disservice not to point this out.

That said, certain parts of this article are very useful to almost anyone. I'm going to use the info in this article to finally get a good DAC. I particularly enjoyed that section.

But I think the industry and community are best served by unvarnished truths, even if it's just a "tip of the hat" mention.

This probably comes out more negative in written form than I intended. It's an interesting topic to discuss. Thanks again for the article.

vqworks's picture

I actually appreciate this article. In defense of the cost involved in acquiring both the software and hardware, when has audiophile sound ever been affordable? In the days of analog-only sound systems the cost of software and hardware was, in fact, more expensive overall when adjusted for inflation. Doesn't anyone remember the cost of audio equipment in the 60s through 80s? Both the past and present are the same in that the audiophile market has always been distinctly different from the mainstream consumer audio market.

That said, HRA is for the audiophile market, a niche market. It will never fly with regular consumers. The biggest reason, as was the case with SACD and DVD-Audio (or even the open-reel and high-end vinyl of yesteryear), is cost. No matter how high the sound quality is (real or imagined), it's a no-go for the masses. This is especially true today when the entertainment industry consists of many more market fragments that are somewhat different but overlap each other and compete for consumer dollars. The last thing on most consumers' minds is HRA. The first is probably iTunes and Netflix downloads. Yes, they'd rather choose 69 cent taco meat over filet mignon but that's reality. At the same time, the HRA devotees who are paying for the software may not necessarily always get what they pay for when there are still files re-sampled from files originating at the CD sampling rate. This isn't disclosed clearly on the websites.

This is the second time I've seen the link, http://xiph.org/~xiphmont/demo/neil-young.html. I can only partially agree with what the author (not clearly identified) states. His illustration and description of the cochlea is widely used to support the claim that human hearing ranges from 20Hz to 20kHz. As we age, the upper limit gradually decreases because our hearing nerve hairs stiffen. But some people openly question that hearing range. First of all, the late David Blackmer, the founder of dbx and Earthworks, described the human auditory mechanism differently. Second, the hearing tests conducted to arrive at the 20Hz to 20kHz range consisted of sine wave test tones. Complex tones are different since such tones interact and intermodulate with each other. It's widely accepted that anyone in his or her 30s is lucky to hear test tones up to 15kHz. But if anyone remembers the old Stereo Review Test Record that was made from '79 through the early '80s, one track used to test the cartridge's tracking performance consisted of a two-tone (two sine waves) signal, 16kHz and 16.3kHz. Even if a 70-year-old can't hear these two tones independently, he or she could hear the 300 Hz difference tone. The average 70-year-old's hearing tops out at 10kHz for sine waves. My point is that we may not hear the sine waves over 20kHz but we can definitely hear the modulation or beating (or interaction) between the ultrasonics and the fundamentals (audible frequencies). I haven't even mentioned impulse response.

Yet, most of the audio community, like the author at xiph.org, is adamant about insisting on the 20Hz to 20kHz limits of human hearing. The author goes on to state that higher sampling rates really lead to the unavoidable ills of higher distortion and noise as if it's written in stone. The fact is that there are plenty of audiophile amplifiers and speakers that can complete the signal chain accurately enough to keep distortion and noise inaudibly low but allow the listener to at least have the potential to hear the effect that overtones have on their music. Doesn't anyone remember super tweeters made in the 70s that could reproduce highs over 30kHz?

My take is that HRA is a good thing. As long as there are enough audiophiles to sustain it by purchasing the software and hardware AND as long as the original master and the rest of the signal chain are well chosen and set up, the potential is definitely there. The audible difference is debatable but it's there. Let's not relentlessly bash something that at least has potential.

Ethyl's Fred's picture

The 300 Hz tone that might have been heard from the Stereo Review record is due to intermodulation distortion of the playback system (most likely dominated by the non-linearities of the cartridge itself), not due to intermodulation of the tones themselves. If you had two completely separate sound systems (including separate speakers) and one played a 16 kHz tone and the other played a 16.3 kHz tone, you would not hear a 300 Hz tone. This is actually a reason why you don't want ultrasonic signals in the audio signal. It just gives more opportunity for spurious artifacts to be introduced in the audible region while not contributing an audible signal in the first place.

Slartibart's picture

The beat frequency effect does not have to originate from a single source. Your comment about filtering out ultrasonic signals is correct.

Ethyl's Fred's picture

The beat frequency is not a separate tone. It is the fading in and out of the sound due to destructive versus constructive interference (see Wikipedia under "Beat frequency"). Intermodulation is the creation of a new tone due to non-linearities. If you cannot hear above 15 kHz (for example), you will not hear a 300 Hz tone when a 16 kHz and a 16.3 kHz tone are playing simultaneously with no nonlinearities.
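The distinction can be demonstrated numerically. A hypothetical sketch using NumPy (not from either commenter): a purely linear sum of 16 kHz and 16.3 kHz sines contains no spectral energy at 300 Hz, but passing that sum through even a mild nonlinearity (a stand-in for a cartridge or speaker) creates a genuine 300 Hz difference tone.

```python
import numpy as np

fs = 96_000                      # sample rate comfortably above the tones
t = np.arange(fs) / fs           # 1 second, so FFT bins fall on exact 1 Hz steps
f1, f2 = 16_000, 16_300

linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
# Mild nonlinearity (e.g. a transducer): add a small squared term.
nonlinear = linear + 0.1 * linear**2

def level_at(signal, freq_hz):
    # Normalized FFT magnitude at a given bin (1 Hz bins for a 1 s window).
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[int(round(freq_hz))]

print(level_at(linear, 300))     # essentially zero: beats are only an envelope
print(level_at(nonlinear, 300))  # clearly nonzero: a real difference tone
```

The squared term expands to a cos((f2-f1)t) component, which is exactly the intermodulation product described above.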

vqworks's picture

Unfortunately, I misused the term intermodulation. In electronics, intermodulation does refer to a form of distortion. But I was really referring to beat frequencies.

The 300 Hz difference tone (beat frequency) that resulted from the Stereo Review LP test track was not due to cartridge non-linearities or distortion at all. Ethyl's Fred was correct in pointing out that the audible tone was alternately fading in and out. This was because the two slightly different 16kHz and 16.3kHz sine wave tones were periodically reinforcing and weakening each other. If the cartridge distortion was inaudibly low, the fading beat frequency (300 Hz) would sound clean; if the distortion was audible, the beat frequency would sound at least somewhat fuzzy. The fuzziness would reveal the distortion, not the fading tone itself.

If anyone is still insisting that cartridge distortion or even electronic distortion is producing this tone, let's skip audio equipment altogether and try an experiment with two tuning forks. Some of you may remember a high school physics experiment using two tuning forks (I do). The only dissimilarity between the tuning forks and the 16kHz/16.3kHz tones I mentioned is that each tuning fork produced an audible tone when struck. But the comparison between the tuning forks and music that includes ultrasonic frequencies still works because each tuning fork produced a different tone. The idea is to demonstrate that if you strike one tuning fork alone you just hear a steady tone, but if you strike both of them you'll hear the beat frequency, the rapid fading in and out or "ringing" of the destructive and constructive interference (or interaction) that Ethyl's Fred referred to.

Back to HRA. If you know for sure that the entire signal chain of your system at home can cleanly produce a very wide frequency response (quite a few amplifiers and speakers can do it well), you have HRA hardware and software, and you know the original source came from a higher-than-CD sampling rate and bit depth, try doing an A-B comparison between a short section of a high-resolution (96kHz/16-bit will do) PCM file and the same section of a down-converted version that was converted to 16-bit/44.1kHz using Audacity (or some other audio editing software). The only requirement is that you listen to music of natural instruments that are loaded with highs (percussion would do - triangles, bells, steel brushes, cymbal crashes, high hats, piano, etc.).

You don't need to believe what anyone claims. If you hear a difference, great. If you don't, that's okay too. At least you can put this whole issue to rest.
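For anyone who wants to prepare this A/B without Audacity, here is a minimal sketch of the down-conversion step, assuming SciPy is available (a synthetic 1 kHz test tone stands in for a real 96 kHz file):

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi, fs_cd = 96_000, 44_100
t = np.arange(fs_hi) / fs_hi
tone = np.sin(2 * np.pi * 1_000 * t)       # 1 s of a 1 kHz tone at 96 kHz

# 44100/96000 reduces to 147/320, so polyphase resampling handles the
# ratio exactly (it also applies the required anti-aliasing filter).
cd_version = resample_poly(tone, 147, 320)

print(len(tone), len(cd_version))          # 96000 44100
```

In practice you would read the hi-res file with `scipy.io.wavfile.read`, resample it the same way, write both versions out, and level-match them before listening blind.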

Ethyl's Fred's picture

A beat frequency is not a tone, and asserting that it is does not change that fact. You even acknowledge that the beat frequency is a "fading in and out" of the volume. That is not a tone. Do you understand exactly what a tone is? Read the Wikipedia articles on "beat frequency" (a volume modulation) and "intermodulation" (actual generation of a difference tone). Two 70-year-olds who cannot hear 16 kHz and 16.3 kHz tuning forks individually are not going to hear anything when both tuning forks are struck simultaneously (your original claim), and certainly not a tone in the upper bass/low mid-range region (300 Hz). To assert so indicates a fundamental lack of understanding of how we perceive pitch.

You can repeat the physics experiment with a stringed instrument. For example, a guitar that is slightly out of tune gives a rapid modulation of the sound level (a "beat frequency") when attempting to play the same note on two different strings, but it does not generate a tone in the deep bass. Nor will you hear a tone in the upper bass/low midrange if you strike the two tuning forks, just a rapid modulation in the level of a very high-pitched tone. If you did hear an actual tone at 300 Hz when playing the Stereo Review test record and not just a modulation of the sound intensity, it can only have come from intermodulation distortion in the system, most likely from the transducers (cartridge and speakers or even the lathe that cut the record groove) since they are the least ideal components.

vqworks's picture

In the case of the tuning forks, I wasn't referring to either of them producing a 16kHz or a 16.3kHz tone, but even if I was, there would be a lot of ringing harmonics resulting from the interaction of these two instruments since there is no such thing as a natural sine wave. But, yes, of course I know what a tone is.

Don't get too emotional. My statements don't need to be in lock step with yours.

Are you implying that if you struck two tuning forks you'd actually hear the rapid modulation? Would this modulation be audible? Seems like it would from your statement.

I read a test report on an old Revox open reel tape deck (1966) that used a bias oscillator employing a 70kHz frequency (considered low for a bias frequency). The reviewer stated that audible beat tones were detected between 18kHz and 20kHz. Is this modulation the result of distortion in the electronics?
Forgetting about that 70kHz bias frequency for a moment, it's been documented that natural instruments produce overtones over the commonly accepted human hearing limit of 20kHz. Most of these overtones are much lower than 70kHz. Don't you think you'd detect some kind of interaction at least in the audible range?

How would you respond to these findings:



Don't you think you owe it to yourself to at least do some experimentation, or would you just say that the late David Blackmer and others were just trying to push snake oil? Notice how Blackmer describes the inner ear in the 8th and 9th paragraphs.

By the way Blackmer, like a lot of other knowledgeable engineers, was a member of the Audio Engineering Society.

Ethyl's Fred's picture

If the frequencies of the sources are audible then a person would of course hear the modulation of the sound level as the source signals interfere (the "beat frequency"). But if they cannot hear the pitches in the first place there is no sound to modulate. The ultrasonic hearing phenomenon referred to in your links is the result of bone conduction when a transducer is placed directly on the head. It does not occur in normal listening. In fact, a recent master's thesis on a study of that very effect (http://publications.lib.chalmers.se/records/fulltext/147171.pdf) explicitly states "The impedance-matching function of the middle ear, necessary to transmit sounds from the outer to the inner ear, is known to be unable to handle ultrasonic frequencies (Pumphrey, 1950)." Most would agree that a researcher that investigates the very phenomenon that is being claimed (and gives citations to the appropriate scientific literature) has more credibility than someone who is trying to sell high-resolution audio equipment.

I have no idea what a reviewer back in 1966 might have been hearing.

And I have run the experiment. I'm 54 years old now and don't hear a thing above 14 kHz. When applying a low pass filter to music or noise I can't reliably tell which is the filtered signal until the cutoff frequency is below 12 kHz. Such are the ravages of age. My kids can still hear tones above 15 kHz. (I've had fun annoying them by playing high pitched sounds.) But these are just more anecdotes that are no more useful than the marketing claims of those selling high resolution equipment.
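The filtered-versus-unfiltered self-test described here is easy to reproduce. A rough sketch, assuming SciPy (the 12 kHz cutoff and white-noise source are just examples; substitute music and whatever cutoff you want to probe):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44_100
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs * 2)            # 2 s of white noise

def lowpass(x, cutoff_hz, fs=fs, order=8):
    # Zero-phase Butterworth low-pass filter (no phase smearing).
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

filtered = lowpass(noise, 12_000)
# Write 'noise' and 'filtered' to WAV files and compare blind,
# lowering the cutoff until you can no longer tell them apart.
```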

vqworks's picture

Everyone's certainly entitled to their own conclusions.

In your own words: "if the frequencies of the sources were audible then a person would of course hear the modulation of the sound level as the source signals interfere (the "beat frequency"). But if they cannot hear the pitches in the first place there is no sound to modulate."

I would have readily accepted that statement because it relates perfectly to the generally accepted conclusion that humans can only hear from 20Hz to 20kHz. But the 1966 review of the open reel deck that I mentioned can be found here:


The Test Results section on the second page states very clearly that there was an audible beat between 18 and 20 kHz between the input signal and the bias. The bias frequency used for recording was 70 kHz (they describe this as a push-pull oscillator between the record amp and VU meter). 70 kHz is not supposed to be audible yet it's beating with a frequency somewhere between 18 and 20 kHz, which is not likely audible for the reviewer either if he was over 30 years old. Although the review doesn't specifically say that the 70 kHz oscillator is for the bias circuitry, it's safe to say that it is because bias is an ultrasonic tone recorded onto tape to make the recording more linear. It's also safe to assume that the reviewer didn't put a transducer directly on his head. True, the reviewer did say that the normal energy content of music is unlikely to result in the audible beat with the bias tone but it's not impossible (the key word is "normal"). By the 80s, cassette deck manufacturers were using extra high bias frequencies (100 to 150 kHz) to specifically reduce the likelihood of audible beating with audible frequencies.

The study you cited predates David Blackmer's by quite a few years. But I can say that the findings of studies often conflict with one another.

At this point, I want to clarify that I'm really not asserting anything; just revealing extra information that I found.

You're not old but at your age your hearing is above average. I'm going to annoy MY kids now.

Ethyl's Fred's picture

That signal is almost certainly due to nonlinearities in the recording process. Magnetic hysteresis, the mechanism used in magnetic recording, is inherently non-linear. Using appropriate bias and keeping the recording level well below saturation minimizes the nonlinearities, but they will always be there. The fact that the intermodulation product was so low that it was inaudible compared to the signal is fully consistent with small but not zero nonlinearities in the recording process.

This review actually provides good evidence of why you don't want ultrasonic information in the signal. It can only add unwanted artifacts in the audible frequencies if the reproduction system deviates from linearity at any stage (which it must to some level, particularly in the transducers).

vqworks's picture

Regardless of the inevitable non-linearities of the magnetic recording process, the reviewer heard a beat at a very high frequency (between 18 and 20 kHz). I haven't heard of anyone even in their mid-30s being able to hear that high. This is coming from a reviewer who wrote for Audio Magazine. Back in the 60s, that publication was extremely objective. There were usually no subjective comments about sound quality until the 70s and 80s.

The beat would have also been detected while testing the record/play frequency response at -10 dB. That's sufficiently low for the tape to avoid any saturation in the audible range. Any distortion in the extreme highs at that level can't be any higher than 1%.

I wonder why tape deck manufacturers in the 70s and 80s were intentionally designing their products to reproduce ultrasonics. Would they bother with this just for marketing purposes?

I will say that filtering out any ultrasonic frequencies is a good idea if the electronics and speakers (maybe the worst offenders) in the signal chain produce unwanted and audible side effects. Of course, if the entire signal chain didn't produce these problems, why bother filtering?

I wonder what Al Griffin thinks at this point.