Wednesday, December 18, 2024

LXdesktop Auralization with Ambisonics

In this post I describe yet another approach to producing a more natural rendering of stereo recordings via headphones. Previously I was exploring the idea of adding the features of speaker playback that we usually find missing on headphones: the room reverb, the natural cross-feed between the signals of the speakers when they reach the ears, the effect of head and torso reflections, etc (see old posts like this one). The results sounded more pleasing and more spatially "correct" to me than any commercial spatializer products I have tried. However, when sitting in front of my LXdesktop setup while wearing headphones and switching back and forth between playback over the speakers and the headphones, the difference is still obvious: the speaker sound is so much wider and has more depth, while the playback via headphones always sounds too close to the head, both in the forward and lateral dimensions.

The new approach that I have tried and describe in this post is based on a multi-microphone capture of the sound field that the speakers and the room create, representing it as 3rd order Ambisonics, and then using binaural renderers. This rendering has ended up sounding surprisingly realistic, finally allowing me to create with headphones the illusion that I'm listening to my speakers. Another interesting point that I will discuss is that, in some sense, this rendering can sound even better than the speakers used for making it.

Why Ambisonics?

Let's start with a simple question—why use Ambisonics at all, when the end result is still a two-channel binaural rendering? Why go through this intermediate representation if we could just make a binaural capture of the speakers using a dummy head in the first place? We probably need to disambiguate what is meant by the "capture" here. Binaural recordings are a great way of capturing an event that we want to listen to later. When we listen back at the same place where the recording was taken, it usually sounds very convincing, as if we are hearing a real event. The cheapest available solution for such a capture is in-ear microphones, and I had tried using the mics by The Sound Professionals for this purpose.

The "event" that is being captured does not necessarily need to be a real performance, it can as well be a playback of a song over the speakers. The benefit of using speakers is that we can capture their properties and use them to listen back not only to this particular song as played by these speakers, but to any arbitrary song played by them. For that what we need is a capture of impulse responses (IRs) and apply them to a stereo recording via convolution. In the simplest case, we just need 2 pairs of IRs: one pair for left speaker to the left and to the right ear, and another pair for the right speaker. These IRs are usually called "BRIRs": "Binaural Room Impulse Responses."

The problem is that making a good capture of BRIRs using a live person's head is very challenging. The main difficulty is that due to the noisiness of the environment and the relatively long reverberation times of real rooms, the sweep signal used for the capture has to last for about 30–60 seconds, and the head must stay completely still during this time, otherwise the information about high frequencies becomes partially incorrect.

As a side note, the same binaural capturing process is used for acquiring HRTFs of real persons; however, such an acquisition is done in anechoic chambers, so the sweeps can be quite short—under 1 second—and there is usually some kind of fixture preventing the subject's head from moving.

As another side note, I know that with the Smyth Realiser they can actually capture BRIRs in customers' rooms in a "high fidelity" way, but that must be their "know-how" and it is not trivial to match with a DIY approach.

Now, what if in our room, instead of a real person, we use a dummy head like the Neumann KU100? That assumes you are OK with buying one—due to their "legendary" status in the audio recording industry they are very pricey. If you have got one, it is of course much easier to make a stationary setup because the artificial head does not move by itself, it is not even breathing, so capturing good quality BRIRs is certainly possible.

However, recorded BRIRs cannot be further manipulated. For example, it is not possible to add rotations for head tracking, as that would require capturing BRIRs for every possible turn of the head (similar to how it's done when capturing HRTFs). So another approach, which is more practical, is to capture the "sound field," because it can be further manipulated and then finally rendered into any particular representation, including a simulation of a dummy head recording. This is what Ambisonics is about.

The Capture and Rendering Process

I've got a chance to get my hands on the Zylia Ambisonics microphone (ZM-1E). It looks like a sphere with a diameter slightly smaller than that of a human head, and has 19 microphone capsules spread over its surface (that's many more than just two ears!). The Zylia microphone has a number of applications, including capturing small band performances, recording the sound of spaces for VR, and of course producing those calming recordings of ocean waves, rain, and birds chirping in the woods.

However, instead of going with this mic into the woods or to a seashore, I stayed indoors and used it to record the output of my LXdesktop speakers. This way I could make Ambisonics recordings that can be rendered into binaural, and even have support for head tracking. The results of preliminary testing, in which I was capturing the playback of the speakers, turned out to be very good. My method consisted of capturing the raw input from the ZM-1E—this produces a 19-channel recording, which is called the "A-format" in Ambisonics terminology. Then I could experiment with this recording by feeding it to the Ambisonics converter provided for free by Zylia. The converter plugin outputs 16 channels of 3rd order Ambisonics (TOA) which can then be used with one of the numerous Ambisonics processing and rendering plugins. The simplified chain I have initially settled upon is as follows:

A practical foundation for the binaural rendering of Ambisonics used by the IEM BinauralDecoder from the "IEM Plug-in Suite" is the research work by B. Bernschütz described in the paper "A Spherical Far Field HRIR/HRTF Compilation of the Neumann KU 100". They recorded IRs for the KU100 head, which was rotated by a robot arm in front of a speaker in an anechoic chamber. Below is a photo by P. Stade from the project page:

This allowed producing IRs for a number of positions on a sphere sufficient for a realistically sounding high order Ambisonics rendering. The work also provides compensating filters for a number of headphones (more on that later), and among them I have found the AKG K240DF (the diffuse field compensated version) and decided to use it during preliminary testing.

Since the binaural rendering is done using a non-personalized HRTF, it's better to use head tracking in order to help the auditory system adapt. Thankfully, with Ambisonics head tracking can be applied quite easily with a plugin usually called a "Rotator." The IEM suite has one called SceneRotator. It needs to connect to the head tracking provider, which is usually a standalone app, via the OSC protocol. I have found a free app called Nxosc which can use the Bluetooth WavesNX head tracker that I had bought a long time ago.
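Just to illustrate what travels over that OSC connection, below is a toy head-tracker stand-in in Python using the python-osc package. I did not use this script myself (Nxosc does the real job), and the /SceneRotator/* address paths are my assumption based on the plugin name—check the plugin documentation before relying on them.

    # Toy OSC sender that sweeps the yaw angle; a stand-in for a real head tracker.
    # Assumes the python-osc package; the address paths are an assumption.
    import math
    import time
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 8000)  # the port SceneRotator listens on

    t = 0.0
    while True:
        yaw = 20.0 * math.sin(t)                 # degrees, slow left-right sweep
        client.send_message("/SceneRotator/yaw", yaw)
        client.send_message("/SceneRotator/pitch", 0.0)
        client.send_message("/SceneRotator/roll", 0.0)
        time.sleep(0.02)                         # roughly 50 Hz update rate
        t += 0.02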

Somewhat non-trivial steps for using IEM SceneRotator with Nxosc are:

  1. Assuming that Nxosc is already running and has been connected to the tracker, we need to connect SceneRotator to it. For that, click on the "OSC" text at the left bottom corner of the plugin window. This opens a pop-up dialog and in it, we need to enter 8000 into the Listen to port field, and then press the Connect button.
  2. Since our setup is listener-centric, we need to "flip" the application of Yaw, Pitch, and Roll parameters by checking "Flip" boxes under the corresponding knobs.

As a result, the SceneRotator window should look like below, and the position controls should be moving according to the movements of the head tracker:

And once I got everything working... The reproduced renderings sounded astonishingly real! In the beginning, there were lots of moments when I was asking myself—"Am I actually listening to headphones? Or did I forget to mute the speakers?" This approach to rendering has turned out to be way more realistic than all of my previous attempts; I finally got the depth and the width of the sound field that matched the speaker experience—great!

A-format IRs

And now the most interesting part. After I had played enough with Ambisonics captures of speaker playback, I decided that I wanted to be able to render any stereo recording like that. Sure, there are plenty of plugins now that can convert stereo signals into Ambisonics and synthesize room reverb: besides the plugins from the IEM suite, there is the SPARTA suite (also free), and products by dearVR. However, what they synthesize is idealized speakers and perfect artificial rooms, but I wanted my real room and my speakers—how do I do that?

I realized that essentially I had to get from two stereo channels to Ambisonics. Initially I was considering capturing IRs of the output of Zylia's Ambisonics plugin for my speakers. However, I was not sure whether the processing that the plugin performs is purely linear. Although the book on Ambisonics by F. Zotter and M. Frank describes the theory of the transformation of A-format into B-format as a matrix multiplication and convolution (see Section 6.11), I was still not sure whether in practice Zylia does just that, or whether it also performs some non-linear operations like decorrelation in order to achieve a better sounding result.
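For reference, this is what the textbook A-format to B-format conversion looks like in the simplest, first-order case of a tetrahedral microphone (not the 19-capsule Zylia, and omitting the per-component equalization filters)—just a sketch of the "matrix multiplication" part of the theory:

    # First-order A-format (tetrahedral capsule layout) to B-format as a fixed
    # matrix operation; per-component equalization filters are omitted here.
    import numpy as np

    def a_to_b_first_order(a_format):
        """a_format: array of shape (samples, 4), capsule order FLU, FRD, BLD, BRU."""
        flu, frd, bld, bru = a_format.T
        w = flu + frd + bld + bru   # omnidirectional component
        x = flu + frd - bld - bru   # front-back figure-of-eight
        y = flu - frd + bld - bru   # left-right figure-of-eight
        z = flu - frd - bld + bru   # up-down figure-of-eight
        return np.stack([w, x, y, z], axis=1)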

Thus, I decided to go one level of processing up and, similar to BRIRs, capture the transfer function for each pair of my speakers and the microphone capsules of the Zylia mic. This is for sure an almost linear function, and it is fully under my control. However, that's a lot of IRs: 2 speakers times 19 microphones makes 38 IRs in total! And in order to be correctly decoded by Zylia's plugin, these IRs have to be aligned relative to each other both in time and in level, so that they correctly represent the sound waves that reach the spherical microphone and get scattered over its surface.

Thankfully, tools like Acourate and REW can help. I have purchased the multi-mic capture license for REW. This way, I could capture the inputs to all 19 microphones from one speaker at once, and, what's important, REW maintains the relative level between the captured IRs. However, maintaining time coherence is trickier. The problem is that REW sets the "start" of the IR at the sample with the maximum value, which usually corresponds to the direct sound of the speaker. However, for IRs captured by the mics on the rear part of the sphere, the power of the sound reflected from room walls can actually be higher than the power of the direct sound, so the "start" position of the IR chosen by REW is not always correct. That's one problem.

Another problem is aligning the levels between the captures of the left and the right speaker. Since they are captured independently, and REW normalizes the captured IRs, the resulting levels between the left and the right speakers may be mismatched.

In order to solve both of these alignment problems, I captured, with the same Zylia setup, periodic pulses played in turn by the left and the right speakers. Then, by convolving the produced IRs with the original test signal and comparing the result with the actual recorded signal, I could see whether the timing and level of the IRs were correct.
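A sketch of this sanity check in Python could look as follows (file names are hypothetical; one speaker/capsule pair shown):

    # Render the test pulse through a captured IR and overlay it with the actual
    # recording from the same capsule; misaligned time or level shows up directly.
    import soundfile as sf
    import matplotlib.pyplot as plt
    from scipy.signal import fftconvolve

    pulse, fs = sf.read("test_pulse.wav")          # the pulse played by a speaker
    ir, _ = sf.read("ir_left_spk_cap07.wav")       # IR for one speaker/capsule pair
    recorded, _ = sf.read("recorded_cap07.wav")    # what that capsule captured

    rendered = fftconvolve(pulse, ir)[:len(recorded)]

    plt.plot(recorded, label="actual capture", alpha=0.7)
    plt.plot(rendered, label="pulse convolved with IR", alpha=0.7)
    plt.legend()
    plt.xlabel("samples")
    plt.show()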

I performed this alignment manually, which of course was time-consuming. I think that if I ever want to do this again, I will try to automate the process. However, it was actually interesting to look at the waveforms, because I could clearly see the difference between the waveforms that were actually captured in the room and the ones obtained by convolution with the captured IRs. They look very different—see the example below:

What we can see is that the pulse rendered via convolution (red trace) has much lower noise and looks more regular than the actual capture (green trace). This is because the IR only captures the linear part of the acoustic transfer function. There is actually a whole story on how captures done with a logarithmic sweep relate to the parameters of the original system; I will leave it for future posts. In short, by rendering via the captured IR, we get rid of a lot of noise and leave out any distortions. So in some sense, the produced sound is a better version of the original system which was used for creating the IRs.

After producing these IRs, I enhanced the rendering chain so that it can take a stereo input and produce 20 channels of audio (19 mic channels plus 1 silent) which emulate the output of the Zylia microphone, that is, the A-format, which is then fed into Zylia's Ambisonics converter (note that on the diagram we only count channels that actually carry signal):

It was an unusual experience for me to create a 40-channel track in Reaper—that's what it takes to duplicate the left and the right stereo signals into 20 channels each for applying the A-format IRs. However, it worked just fine. I truly appreciate the robustness and reliability of Reaper!
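Offline, the same emulation can be sketched in a few lines of Python (hypothetical file names; all IRs are assumed to have the same length and sample rate):

    # Emulate the Zylia A-format output from a stereo track using the 38 captured
    # speaker-to-capsule IRs; the 20th channel is left silent to match the layout.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    stereo, fs = sf.read("song_stereo.wav")
    left, right = stereo[:, 0], stereo[:, 1]

    capsules = []
    for k in range(19):
        ir_l, _ = sf.read(f"ir_left_spk_cap{k:02d}.wav")
        ir_r, _ = sf.read(f"ir_right_spk_cap{k:02d}.wav")
        # Each capsule "hears" both speakers, so mix their contributions.
        capsules.append(fftconvolve(left, ir_l) + fftconvolve(right, ir_r))

    a_format = np.stack(capsules, axis=1)          # (samples, 19)
    a_format = np.pad(a_format, ((0, 0), (0, 1)))  # add the silent 20th channel
    sf.write("emulated_a_format.wav", a_format, fs, subtype="FLOAT")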

One issue remained though. Since the Zylia is not a calibrated measurement microphone, it imprints its own "sound signature" on the captured IRs. Just by looking at the transfer functions of these IRs I could see that both the bass and the high frequencies have a slope which I did not observe while performing the tuning of LXdesktop. Thus, some correction was needed—how to do it properly?

Zylia Capture and Headphones Compensation

Yet another interesting result of the "Spherical Far Field HRIR/HRTF Compilation" paper is that if we take the frequency responses for all the positions captured by the KU100 head around the sphere and average them, the result deviates from the expected diffuse field frequency response by a certain amount. This can be seen on the graph below, adapted from the paper, which is overlaid with the graph from the KU100 manual showing the response (RTA bars) of the KU100 in a diffuse field:

I used the actual diffuse field frequency response as the target for my correction. The test signal used in this case is uncorrelated stereo noise (zero interchannel correlation, see notes on ICCC in this post). Note that the "Headphone Equalization" in IEM BinauralDecoder has to be turned off during this tuning.

And this adjustment works, however applying it also "resets" the speaker target curve, so I had to re-insert it into the processing chain. With that, the final version of the processing chain looks like this:

Now if we turn back on the headphone equalization in the IEM BinauralDecoder, it will compensate both for the deviations from the diffuse field for the KU100 head and the selected headphones.

It's funny that although the AKG K240DF is described by AKG as a diffuse field compensated headphone, in reality deviations of as much as 3 dB from the diffuse field response across frequency bands are the norm. Usually these deviations exist on purpose, to help make listening to "normal" non-binaural stereo recordings more pleasant, and in addition to produce what is known as the "signature sound" of the headphone maker.

Would Personalized HRTF Work Better?

I was actually surprised by the fact that the use of non-personalized HRTFs (even non-human ones, since they were captured on a KU100 artificial head) works that well for giving an outside-of-the-head sound experience and sufficiently accurate localization, including locations above and behind the head. I was wondering how much of an improvement there would be if I actually had my own HRTF captured (I haven't yet, unfortunately) and could use it for rendering.

One thing I have noticed with the non-personalized HRTF is that the tonal color may be inaccurate. A simple experiment to demonstrate this is to play mono pink noise and then pan it, with simple amplitude panning, to the left and right positions. On my speakers, the noise retains its "color," but in my binaural simulation I can hear a difference between the leftmost and the rightmost positions. Since I'm not a professional sound engineer, it's hard for me to equalize this discrepancy precisely "by ear," and obviously no measurement would help, because this is how my auditory system compensates for my own HRTFs.
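For anyone who wants to repeat the experiment, here is a small sketch that produces the test files: the same mono pink noise amplitude-panned hard left, center, and hard right (a constant-power pan law is assumed).

    # Generate mono pink-ish noise and amplitude-pan it to three positions.
    import numpy as np
    import soundfile as sf

    fs, dur = 48000, 5.0
    rng = np.random.default_rng(0)
    white = rng.standard_normal(int(fs * dur))

    # Crude pink noise: apply a 1/sqrt(f) amplitude slope in the frequency domain.
    spec = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(len(white), 1 / fs)
    spec[1:] /= np.sqrt(freqs[1:])
    pink = np.fft.irfft(spec, n=len(white))
    pink /= np.max(np.abs(pink)) * 1.1

    for name, pan in [("left", 0.0), ("center", 0.5), ("right", 1.0)]:
        gain_l = np.cos(pan * np.pi / 2)           # constant-power pan law
        gain_r = np.sin(pan * np.pi / 2)
        out = np.stack([pink * gain_l, pink * gain_r], axis=1)
        sf.write(f"pink_{name}.wav", out, fs)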

By the way, while searching for information on some details related to my project, I found a similar research effort called "Binaural Modeling for High-Fidelity Spatial Audio" done by Viktor Gunnarsson from Dirac, which recently earned him a PhD degree. The work describes a lot of things that I was also thinking of, including the use of personalized HRTFs.

The fun part is that Viktor also used the same Zylia microphone; however, unlike me, he also had access to a proper Dolby Atmos multichannel room—you can check out his pictures in this blog post. I hope that this research will end up being used in some product intended for end users, so that audiophiles can enjoy it :)

Music

Finally, this is a brief list of music that I had sort of "rediscovered" while testing my new spatializer.

"You Want it Darker" by Leonard Cohen which was released in 2016 shortly before his death. I'm not a big fan of Cohen, but this particular album is something that I truly like. The opening song creates a wide field for the choral, and Cohen's low voice drops into its net like a massive stone. It makes a big difference when in headphones I can perceive his voice as being at a hand distance before me, as if he is speaking to me, instead of hearing it right up to my face, which feels a bit disturbing. This track helps to evaluate whether the bass is not tuned up too much, as otherwise the vibration of headphones on your head can totally ruin the feeling of distance.

"Imaginary Being" by M-Seven, released in 2011. It was long before the Dolby Atmos was created, but still sound of this album delivers a great envelopment, and it feels nice being surrounded by its synthetic field. My two favorite tracks are "Gone" and "Ghetto Blaster Cowboy" which creates an illusion of being in some "western" movie.

"Mixing Colours" by Roger and Brian Eno, released in 2020 is another example of enveloping sound. Each track carries a name of a stone, and they create a unique timbre for each one of them. Also, Brian Eno is a master of reverb which he uses skillfully on all of his "ambient" albums, including this one, giving a very cozy feeling to the listener.

The self-titled album by Penguin Café Orchestra, released in 1996. I discovered this band by looking at the list of releases on Brian Eno's experimental "Obscure Records" label. The music is rather "eccentric" and sometimes experimental—check the "Telephone and Rubber Band" track for example. However, overall it's a nice work, with lots of different instruments used, placed in the sound field around the listener.

"And Justice for All" by Metallica, released in 1988—not an obvious choice after the selection of albums above—this one was suggested by my son. After I have completed setting up my processing chain and wanted to have an independent confirmation that it really sounds like speakers, I have summoned my son and asked him to listen to any track from Apple Music. And he chose "Blackened". In the end, it has turned out to be a good test track for number reasons: first, since it's the "wall of sound" type of music, with lots of harmonic distortion, it's a great way to check whether the DSP processing chain is clean, or does it add any unpleasant effects ("digitis" as Bob Katz calls it) that would become immediately obvious. Second, again it's a good test to see if the amount of bass is right in the headphones. As one usually wants to listen to metal at "spirited" volume levels, any extra amplification of bass frequencies will start shaking the headphones, ruining the externalization perception. And the final reason is that listening in the "virtual studio" conditions that binauralized headphone sound creates allows to hear every detail of the song, so I got a feeling that during more quiet passages of "Blackened"—for example the one starting after the time position 2:50—I can hear traces of the sound of Jason Newsted's bass guitar which was infamously brought down during mixing of this album. Later I have confirmed that it's indeed the bass guitar by listening to a remix done by Eric Hill on YouTube which had brought its track back to a normal level.

Conclusions and Future Work

This experiment of creating a binaural rendering of my desktop speakers using an Ambisonics microphone has demonstrated good potential for achieving high fidelity, realistic headphone playback at a relatively low cost. In the future, this is what I would like to explore and talk about:

  • use of other models of headphones, especially those that are missing from the dataset captured by B. Bernschütz and thus are not present in the headphone equalization list of the IEM BinauralDecoder plugin;
  • additional elements of processing that improve the sound rendered via this chain;
  • use of other head trackers, like HeadTracker 1 by Supperware;
  • individualization of the headphone transfer function and binaural rendering;
  • ways to compare the headphone playback to speaker playback directly.

Also, one thing I would like to experiment with is processing the left and the right speaker signals separately. Currently we mix the signals from the speakers at the input of the simulated Zylia microphone. However, since the microphone is significantly smaller and less absorptive than a human head and torso, it does not achieve the same level of crosstalk attenuation that I have with my physical speakers. So the idea is to try processing the input from the left and the right speakers separately, in parallel, and only after we get to the Ambisonics version of the signal, or maybe even to the binaural rendering, mix these signals together, providing the necessary crosstalk cancellation. I'm not sure how to do that in the Ambisonics representation, but in the binaural version it's pretty easy because we end up with simulated BRIRs, that is, two pairs of two-channel signals, so we can attenuate the contralateral signal as desired. In some sense, this approach simulates a barrier installed between the ears of the listener, but without the need to deal with the side effects of an actual physical barrier.
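At the binaural stage this idea is easy to prototype: render each speaker through its own pair of simulated BRIRs, attenuate the contralateral paths by some amount, and only then mix. A rough sketch, with made-up file names and an arbitrary 6 dB of extra attenuation:

    # Per-speaker binaural rendering with extra attenuation of the contralateral
    # paths before mixing; the 6 dB figure and file names are placeholders.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    stereo, fs = sf.read("song_stereo.wav")
    left, right = stereo[:, 0], stereo[:, 1]
    xtc_gain = 10 ** (-6 / 20)   # extra contralateral attenuation, -6 dB

    # Simulated BRIRs: "LL" means left speaker to left ear, and so on.
    brir = {name: sf.read(f"sim_brir_{name}.wav")[0]
            for name in ("LL", "LR", "RL", "RR")}

    ear_l = fftconvolve(left, brir["LL"]) + xtc_gain * fftconvolve(right, brir["RL"])
    ear_r = fftconvolve(right, brir["RR"]) + xtc_gain * fftconvolve(left, brir["LR"])

    out = np.stack([ear_l, ear_r], axis=1)
    sf.write("binaural_xtc.wav", out / np.max(np.abs(out)), fs)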

Monday, August 26, 2024

LXmini Desktop Version (LXdesktop)—Part III: Binaural Tuning

This is another "technical note" about my experience of building and tuning a pair of desktop speakers with a DSP crossover, based on the original design of the LXmini by Siegfried Linkwitz. This post is about the aspect of tuning which helps to obtain the most natural presentation of the audio scene encoded as a stereo format. As Linkwitz itself explains in this talk "Accurate sound reproduction from two speakers in a living room", a stereo representation is no more than an illusion which only appears in the brain of the listener. However, this can be a rather realistic illusion. It's realistic when the listener is able to forget that the sound scene which he or she is hearing is created using the speakers. Ideally, the speakers themselves should not be registered by the auditory brain as the sound sources.

In the Q&A section of the talk, in particular, on this video fragment, somebody is asking Siegfried what is the recommended speaker setup for a small room. And he recommends putting the speakers wider, and to place the listening spot closer to them. That's in fact what I've done in my setup (see one of the previous posts which illustrates the arrangement). The idea behind this setup is to create sort of "giant headphones"—this characteristic is attributed in to the sense of envelopment that this setup can achieve. In fact, the sound of speakers located at some distance is more natural for our auditory brain than the sound from headphones because the sound from the speakers gets filtered by our natural HRTF, thus it's easier for the brain to decode it properly. However, our perception of sound from these "giant headphones" suffers both from a strong interaction between the speakers themselves—that's the crosstalk affecting the center image, and between the speakers and the room—this interaction produces reflections and additional reverb not existing on the recording—that's the "sound of the room."

The good part is that in my unusual setup the predominantly dipole radiation pattern of the speakers is supposed to reduce the crosstalk without resorting to DSP tricks. And as for reflections, they can be filtered out by the auditory brain when they are sufficiently separated from the direct sound and—that's another interesting point from Siegfried's talk—have spectral content similar to the direct sound. The last topic is actually complex, and different people have different views on it. However, crosstalk cancellation is something that can be easily measured.

Cross-talk Cancellation

I have made two types of measurements: one is the usual log sweep, which allows recreating the impulse response and windowing it as necessary, and the other is a "steady state" measurement produced by taking an "infinitely" averaged RTA of pink noise. Both measurements are made using the "dummy head" technique, so they are binaural. However, since I don't have a proper head and torso simulator at home, I just use my own head and the binaural microphones by The Sound Professionals built with the XLR output option, so that they can be connected to the same sound card used to drive the speakers. I use REW for these captures, and I have purchased the "multi-mic input" option which is essential for this job since I want to record both the ipsi- and contralateral ear inputs at the same time.

The typical way to measure the effectiveness of cross-talk cancellation (XTC) is to consider the measurement at the ipsilateral (closer to the speaker) ear and see by how much it must be attenuated in order to obtain the same result as the measurement at the contralateral (farther from the speaker, shadowed by the head) ear. The resulting frequency response curve is the profile of the effective attenuation.

So let's see. If we look at the steady state response, the XTC from my speaker arrangement is quite modest—around -10 dB in the best case. Below is the spectrum of the attenuation for the left and for the right ear:

However, if we look at the direct sound only, by applying a frequency-dependent window (FDW) of 8 cycles to the log sweep measurement, results look much better, showing consistent attenuation values between -20 and -10 dB. It works better for one ear due to asymmetry of the setup:

Note that the deep notches as well as a couple of peaks are due to comb filtering from reflections and the effects of the dipole pattern itself. I must warn that just looking at what appears to the eye as the "average value" on the graph and taking this as the suppression efficiency measure may be self-deceiving. In order to calculate the actual negative gain of the XTC, I measured the difference in the RMS level of pink noise filtered via the impulse responses of these natural attenuation filters. The results are somewhat modest: -4.7 dB for the sound of the left speaker and -4.9 dB for the sound of the right speaker.
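The estimate itself is simple to reproduce; a sketch with hypothetical file names, comparing the RMS level of the same pink noise filtered through the ipsi- and contralateral IRs of one speaker:

    # Estimate the effective XTC as the RMS level difference of pink noise
    # filtered through the contralateral vs. the ipsilateral impulse response.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    def rms_db(x):
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

    pink, fs = sf.read("pink_noise.wav")
    ir_ipsi, _ = sf.read("left_spk_left_ear.wav")     # ipsilateral IR
    ir_contra, _ = sf.read("left_spk_right_ear.wav")  # contralateral IR

    xtc_db = rms_db(fftconvolve(pink, ir_contra)) - rms_db(fftconvolve(pink, ir_ipsi))
    print(f"Effective crosstalk attenuation: {xtc_db:.1f} dB")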

For a comparison with DSP XTC solutions, I checked Chapter 5 of the book "Immersive Sound" which talks about BACCH filters. There is a graph of a measurement similar to the one I have done; they made it using a Neumann KU-100 dummy head in a real (non-anechoic) room, with speakers set up at 60 degrees, at a 2.5 meter distance from the head, and with their filters turned on. Figure 5.12 of the book presents the measured spectrum at the ipsi- and contralateral ears, and similarly, they measure the effectiveness of the XTC by subtracting these. I have digitized their graphs and derived the attenuation curve; it is presented on the graph below as the brown solid curve, and I have changed my curves to dashed and dotted lines for readability:

We can see that the BACCH XTC does a better job, except in the region of 7–10 kHz. Also note that since I have a single subwoofer, there is no attenuation below 100 Hz. The author of the chapter calculates the level of attenuation as an average of the frequency response curve values across the audible spectrum, and their result is -19.54 dB. However, since I had digitized their graph, I could build a filter which implements it and measure the resulting decrease in the RMS level of pink noise, the same method that I used for my measurements. Measured this way, the effective gain of the BACCH XTC is -8.86 dB. This is still better than my result, but only by 4 dB. So I must admit, DSP can do a better job than the natural attenuation due to the speaker arrangement and the radiation pattern; however, as we can see from the chapter text, building a custom XTC filter tailored to a particular setup is a challenging task, and there are many caveats that need to be taken care of.

Center Image Height

As I have explained in the section "Target Curve Adjustments" of the previous post, in order to provide a correct rendering of the center image, the spectrum of the sound from the speakers, which are on the sides of the listener, must be corrected so that the phantom center image has the spectrum that a real center source would have. The paper by Linkwitz which I cited in that post contains the necessary details. One good test of the correction is to make sure that a source which is intended to be at ear (or eye) height is actually heard this way. For that, I use the track called "Height Test"—track 46 from the "Chesky Records Jazz Sampler & Audiophile Test Compact Disc, Vol. 2".

Merging Perceived Results of ITD and ILD Panning

Changing the spectrum of side images in the way described in the previous section also helps to reduce attention to the speakers, because now sounds coming from them do not have the spectral characteristic of a side source.

However, while listening to older recordings (from the 70s or earlier) that use "naive" hard panning of instruments entirely to the left or to the right by level adjustment only, I noticed that this spectrum change is not enough to decouple the sound of an instrument from the speaker which is playing it. Real acoustic recordings and modern tracks produced with Dolby Atmos sounded better. This is likely because modern panning techniques use both level and delay panning. They may actually use even more—to get a full idea of what is possible I used a panning plugin called "PanPot" by GoodHertz.

While playing with the plugin using dry percussion sounds from the "Sound Check" CD produced by Alan Parsons and Stephen Court, I noticed that sounds hard panned using delay panning are perceived a bit "away" from the speakers, while level panned sounds are perceived as coming from the speaker itself. Schematically, it was perceived like this:

I decided to combine them. In order to move hard ILD panned sounds I use the "Tone Control" plugin, also by GoodHertz. It can do Mid/Side processing, and I switched it to the "Side only" mode. Recall from my previous post on Mid/Side Equalization that M/S decomposition does not completely split out the phantom center from the sides. However, it is good enough to tune hard panned sources.
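For those without the plugin, the underlying operation is easy to sketch: decompose into Mid/Side, apply a treble shelf to the Side signal only, and recombine. The shelf parameters below are placeholders, not my final settings; the shelf itself is the standard RBJ audio EQ cookbook biquad.

    # Mid/Side processing with a high shelf applied to the Side channel only.
    import numpy as np
    import soundfile as sf
    from scipy.signal import lfilter

    def high_shelf(fs, f0, gain_db, slope=1.0):
        # RBJ audio EQ cookbook high-shelf biquad coefficients.
        a_lin = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / 2 * np.sqrt((a_lin + 1 / a_lin) * (1 / slope - 1) + 2)
        cosw = np.cos(w0)
        b = np.array([a_lin * ((a_lin + 1) + (a_lin - 1) * cosw + 2 * np.sqrt(a_lin) * alpha),
                      -2 * a_lin * ((a_lin - 1) + (a_lin + 1) * cosw),
                      a_lin * ((a_lin + 1) + (a_lin - 1) * cosw - 2 * np.sqrt(a_lin) * alpha)])
        a = np.array([(a_lin + 1) - (a_lin - 1) * cosw + 2 * np.sqrt(a_lin) * alpha,
                      2 * ((a_lin - 1) - (a_lin + 1) * cosw),
                      (a_lin + 1) - (a_lin - 1) * cosw - 2 * np.sqrt(a_lin) * alpha])
        return b / a[0], a / a[0]

    stereo, fs = sf.read("track.wav")
    left, right = stereo[:, 0], stereo[:, 1]
    mid, side = (left + right) / 2, (left - right) / 2

    b, a = high_shelf(fs, f0=5000.0, gain_db=2.0)  # placeholder corner and gain
    side = lfilter(b, a, side)                     # "Side only" processing

    out = np.stack([mid + side, mid - side], axis=1)
    sf.write("track_side_shelf.wav", out, fs)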

I have prepared a test track which interleaves pure ILD and ILD+ITD panning of a dry snare drum sound. While listening to it, I experimented with the settings for the corner frequency, slope, and gain of the treble shelf, as well as with the overall gain of the side component. The goal was to move the ILD panned source closer to the position of an ILD+ITD panned source, and at the same time not to change its perceived tonality too much. Obviously the results of panning using different techniques will never sound identical; however, I could get close enough. As a result, the sound scene moved a bit away from me, behind the plane of the speakers:

I have pictured the scene as going beyond the speakers because this happens with some well-made recordings, like track 47, "General Image and Resolution Test," from "Chesky Records Jazz Sampler & Audiophile Test Compact Disc, Vol. 2," where the sounds of doors being shut are rendered well beyond the speakers, at a distance.

It's interesting that the correction of purely level-panned images really helped to decouple the speakers from the sound they are producing! I used tracks from the full OST of "The Holdovers" movie, which features a number of records produced in the 60s and 70s. Note that as far as I know, the full version with all tracks is only available on vinyl—the usual issue with licensing on streaming services prevents them from offering all the tracks. And the producer of the OST decided not to bother with offering a CD.

Banded Correlation and Decorrelation

Since my speaker system is not a dipole across the entire spectrum, and walls are located nearby, there was still some "unnaturalness" to the image, even though the quasi-anechoic frequency response looks correct. How can we do further tuning without noticeably affecting the frequency response? The trick is that we can change the response depending on the correlation between the channels.

For example, while listening to the bass line of "Spanish Harlem", I noticed that the first note, which is mostly delivered by the subwoofer, does not sound as strong as the following notes, which are higher and are delivered by the main speakers. I did not want to raise the level of the sub, because I know it is at the right level, and listening to OSTs by Hans Zimmer proves that—I don't want the sub to be any louder :). Instead, my solution was to decrease the level of the correlated component (the phantom center) in the frequency range served by the woofers—they are omnidirectional, thus their sound is reinforced by the walls. For that I used the "Phantom Center" plugin by Bertom Audio.

Another correlation tweak needs to be done in the high frequency region, above 4.7 kHz. I took the track I often use—the left / right imaging test from "Chesky Records Jazz Sampler Vol. 1"—and overlaid the "midway" position announcement with the "off-stage" position. The initial lack of correlation, due to a somewhat excessive amount of reverberation at high frequencies, causes the off-stage announcement to sound either in the front, in a position similar to the "midway" position, or even "inside the head." By increasing the correlation I was able to move it to the intended location. However, too much correlation causes the phantom center to become too strong and too narrow, which makes the "midway" position collapse toward the center. Thus, by listening to both announcements at the same time I can increase the correlation by just the right amount.

Finally, I used a set of 1/3 octave band-filtered dry mono recordings of percussion instruments converted to stereo: first with identical left and right channels, then with the right channel inverted. This is the same set of sounds that I used in this post about headphones. I compared how loud the correlated version sounds relative to the anti-correlated one. It is expected that it should be of the same loudness or a bit louder; however, I found that in the region between 400 and 900 Hz anti-correlated sounds are perceived to be louder than correlated ones. Unlike in my previous experience with traditionally arranged speakers, this time I was able to reduce the loudness of anti-correlated sounds in this band.
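Producing such a test pair for one band can be sketched like this (the source file and the band center frequency are placeholders):

    # One 1/3-octave band of a dry mono sample, written out as a correlated and
    # an anti-correlated stereo file.
    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    mono, fs = sf.read("dry_percussion.wav")
    if mono.ndim > 1:
        mono = mono.mean(axis=1)                       # fold to mono if needed

    fc = 630.0                                         # band center, Hz
    lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)      # 1/3-octave band edges
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, mono)

    sf.write("band_correlated.wav", np.stack([band, band], axis=1), fs)
    sf.write("band_anticorrelated.wav", np.stack([band, -band], axis=1), fs)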

This perceptual correction helps to reduce the attention paid to details in the sound produced by the speakers that get amplified by the room too much. The sound becomes less fatiguing—that's yet another aspect of "naturalness." As Linkwitz put it, it's better to make our brain add missing details than to force it to remove extra ones—the latter costs much more mental effort, which manifests itself in exhaustion after long listening sessions.

Processing Chain

The description of the tuning process has turned out to be a bit lengthy. Let's summarize it with a scheme of the filters that I put on the input path. They are inserted before the digital crossover and correction filters that I described in Part II of this post series.

So, first there is the Tone Control applied to the "Side" part of the M/S decomposition, which is intended to move ILD-panned sounds a bit deeper into the virtual scene to match ITD+ILD-panned sounds. Then come 3 instances of the Phantom Center plugin tuned to different frequency bands, which do the job of correcting the effects of the room–speaker interaction. I wish there were some kind of an "equalizer" plugin that could apply phantom center balancing to multiple bands—Bertom Audio, take note :)

Some Tracks to Listen To

Having achieved good imaging through my speakers, I re-listened to many albums that I had pinned in my collection. Here are some tracks that I can recommend listening to:

  • "The Snake and The Moon" from the "Spiritchaser" album (2007) by Dead Can Dance. It starts with a buzzing sound rotating around the head. The rhythm of the song is set by an African drum pulsating at the center, and there are other instruments appearing at different distances and locations.
  • "The Fall of the House of Usher" multi-track piece from "Tales of Mystery and Imagination - Edgar Allan Poe" album (1976) by The Alan Parsons Project. Alan Parsons is known for his audio engineering excellence, and this debut album of his own band features rich and interesting combination of various kinds of instruments: synths, guitars, live choir etc. This album was at some point re-issued in a 5.1 version, but I still enjoy it in stereo.
  • "Spark the Coil" from "We Are the Alchemists" joint album (2015) by Architect, Sonic Area & Hologram is a rhythmical electronic piece with a very clear sound and precise instruments placement.
  • "Altered State" from the "360" (2010) album by Asura. Nice melodic piece with ethnic instruments and ambient electronics. The track produces good enveloping effect and delivers well positioned instruments.
  • "Fly On the Windscreen (Final)" from the "Black Celebration" (1986) album by Depeche Mode. Although the recording technique used here is not very sophisticated—one can hear that panning of some instruments is done with level only—it's interesting to listen into different kinds of reverbs applied to sound effects, instruments and vocals.
  • "Prep for Combat" from the "AirMech" game soundtrack released as the 2012 album by the industrial band Front Line Assembly. It uses rather powerful sounding electronic sounds that are panned dynamically and fly around the listener.
  • But of course, when we speak about powerful sound, it is hard to compete with Hans Zimmer, whose "Hijack" track from the "Blade Runner 2049" OST (2017) sends shivers down the spine and can be used as a great test of how well the subwoofer(s) are integrated with the main speakers.
  • "Zen Garden (Ryōan-ji Temple Kyoto)" from the classic "Le Parc" album from 1985 by Tangerine Dream starts with ambient sounds of wind and then adds gentle percussion sounds carefully panned along the virtual sound stage. I'm not sure which instruments are synthesized and which are real, but they all sound quite natural, with their locations well-defined.
  • And finally, one track which is not music but rather the recording of rain—featured at the very end of the movie "Memoria" (2021). I keep listening to it over and over again, especially in the evenings. It feels like you are sitting on a porch, watching rain and listening to delicate yet powerful rumbling of the thunder in the distance. It's funny that the track ends with someone coming up to the recording rig (you can feel their breathing) and turning it off—not sure why they did not cut this out during post-production, but it definitely enhances the feeling of realism of the recording :)

Wednesday, July 17, 2024

Adding Bass Traps

As I have mentioned in my previous post about setting up the LXdesktops—the desktop version of LXmini with custom tuning—I ordered some bass traps in order to try to improve reverberation times and maybe deal with the room modes. This is a short report on what I managed to achieve as well as an interesting point about over-optimization when tuning speakers from the listening position only.

In my room I already have absorbers on the walls behind and to the left side of the speakers (the right wall is further away and has a door), and one on the ceiling above the desk. These are wall-mounted 2 inch "FreeStand" absorbers by GIK Acoustics, except for one which is 4 inches thick because it is located very close to the left speaker and must absorb way more energy:

Besides these absorbers, I also already had one "Soffit" bass trap, residing on top of the room's closet. In addition to these "engineered" absorbers, there are also two "environmental" ones: a twin bed with a thick 6 inch mattress, and a rug on the floor. Yet the room is not "dead" and still has enough reflective surfaces, as well as a lot of potential for creating room modes. The room is shaped quite irregularly and has a partially slanted ceiling, so calculating room modes analytically is not very easy.

To all these acoustic treatments I decided to add 3 more bass traps: one big soffit bass trap, essentially the same as the one I already have, and two smaller "Monster" traps by GIK. After the traps arrived, I made a "before" measurement and an "after" measurement. Here are some comparisons.

Reverb Time

In order to analyze changes in the reverb time I made measurements both with Acourate and REW. Acourate displays them using smooth curves, and also provides reference "corridors" from two standards: DIN 18041 and EBU 3276 calculated from the room size. Below are the "before" and "after" RT60 graphs done by Acourate with the corridors from the EBU 3276 "studio" profile:

As we can see, the reverb time of the bass has lowered by about 0.1 s—not much. However, it's interesting that the reverb of the rest of the frequency spectrum has also lowered, and it now almost fits within the upper bound of the "studio" corridor. Thus, these wide range bass traps also have a good effect on taming high frequencies. By the way, for the big soffit bass trap GIK offers an option of installing a "limiter" (FlexRange) which is intended to reduce this effect, but in my case I didn't need it.

In REW, the spectrograms also have become more uniform. Below are, again, before and after for the left and the right speaker:

So, there is indeed a noticeable effect on the reverb time; however, it is not dramatic. As for the room modes, there was one interesting effect, described in the following section. By the way, here is an old but useful review of active bass traps by B. Katz, where we can see that active traps are more effective; however, they are usually pricier than passive ones.

The Over-Correction Effect

So, one interesting thing that I noticed while placing the new bass traps and re-measuring is an anomaly in the frequency response of the right speaker. Here is how this frequency region looks before and after installing the bass traps:

It is noticeable that the region has become less "regular". Also, the notch from a room mode has become less deep. I also recalled that this region has unusually high distortion which I was explaining to myself as a result of an interaction with a room mode:

So, definitely something is going on with room modes here. Maybe the addition of bass traps decreased the effect of one "negative" mode at approximately 97 Hz, and this resulted in a "swelling."

Another interesting point was that when listening to the log sweep from the side I could clearly hear that the driver was "overworking"—it definitely had a boost which clearly was not needed. I decided to make a near field measurement of the woofer right by the driver. As I expected, the driver was actually boosted in the range around 77–122 Hz. I realized that this likely comes from the fact that at the microphone position there was a cancellation in this region resulting from room modes, which led to extra boosting when doing the woofer correction. Indeed, when I looked at the filter, it had a hump there. Using the near field measurement, I created an inversion of the hump and applied it to the correction filter:

After that, the measurement at the listening position started looking less ideal; however, two facts indicated that this tuning is more correct. First, when listening to the log sweep from the side, the region around 97 Hz now sounded more even. Second, the distortion which I was initially observing was gone:

That's an important lesson for me. Although frequency dependent windowing helps to emulate a measurement in an anechoic chamber by cutting down the effect of reflections, it can't eliminate the effect of room modes. This may sound trivial; however, apparently this consideration did not cross my mind when I was doing the tuning. Thus, at least for woofers, it's important to compare measurements from the near field and from the listening position in order to avoid over-correction.

Servos Are The Answer?

What is interesting is that I had these issues with room modes with the speakers, but not with the subwoofer. The subwoofer was producing a very smooth response almost right from the start. This is an interesting feature of the Rythmik subwoofer. I also have a simpler subwoofer by KRK, and it is not as easy to deal with in an untreated room.

What is so special about Rythmik is that it uses their "Direct Servo" technology. The idea is that the active electronics has feedback from a sensing coil on the driver and can "notice" when room modes are "helping" the driver (with a resonance), or vice versa, and correct the driver gain accordingly. This requires a specially built driver with an extra coil, but I think it's worth it.

One drawback of the active correction that I can think of is that signal-dependent corrections essentially produce non-linear behavior and thus add distortion (see my old post on automatic gain control). However, for bass that is likely not a big issue. So one idea that has come to my mind for the next generation of my LXmini mods is to try using a servo driver for the woofer. It would be interesting to see if this helps to deal with room modes.

Saturday, July 6, 2024

LXmini Desktop Version (LXdesktop)—Part II: DSP Tuning

This post continues my story about the desktop version of LXmini speakers that I have built and set up on my computer desk in a somewhat unusual way:

So, why are the speakers "toed out"? The idea is that since the full range driver has a dipole dispersion pattern, if we turn it outwards, the null of the dipole becomes directed towards the opposite (contralateral) ear, thus naturally contributing to the suppression of the acoustic cross-talk between the speakers. These days this effect is usually achieved with DSP by injecting a suppressing signal into the opposite speaker (see a great post by Archimago and STC on this topic). However, it would be nice if the opposite ear were just naturally blocked from hearing the sound of the speaker.

I've estimated the angle between the full range driver's axis and the opposite ear to be approximately 75°, so the suppression is not maximal. However, it should still add an extra -5 to -10 dB of attenuation on top of head shadowing, depending on the frequency. I plan to measure the exact attenuation profile some time later. Another feature of setting up the speakers this way is that the back of each speaker gets farther from the back wall, at about the recommended minimum of 1 meter.

Ideas for Tuning

Since the original LXmini tuning aims to achieve a flat on-axis response (see the design notes), my unusual speaker arrangement required a dedicated tuning. I started looking around for ideas on how to achieve a close to ideal response in the time domain.

The author of Acourate, Dr. Brüggemann, holds a very strong position in favor of linear phase crossovers. Acourate can generate various kinds of crossovers, in both minimum phase and linear phase versions. Also, there are some tools (including a new one added in the recent Version 3) which are intended to bring each driver as close as possible to the corresponding band pass filter of the crossover, both in amplitude and phase. Together with proper time alignment of the sound from each driver at the listening position, this allows achieving "ideal" summing of the acoustic crossover components, yielding a perfect Dirac impulse response for the speaker as a whole.

However, my initial concerns were about the pre- and post-ringing behavior of linear phase filters. As we know, their impulse responses are symmetric around the center, and the pre-ringing may potentially exceed the thresholds of masking. When the components of a linear phase crossover sum up as intended—with their peaks coinciding—the pre- and post-ringing components from each crossover band cancel each other. However, if there are time shifts—even as small as a fraction of a millisecond—this does not happen. The example below is for a two-band linear phase Neville Thiele crossover:

This is how the summed impulse response looks on a logarithmic scale when the components are properly time aligned, and also for 0.23 ms and 0.5 ms time-of-arrival differences:

The red vertical line is the ideal IR which occurs in the ideal time alignment case, and on the right are the IRs when one of the crossover components is shifted. Recall that these delays correspond to a distance difference of just about 7.88 cm and 17 cm—that's comparable to the size of the human head.
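The effect is easy to reproduce with any complementary linear phase pair—below is a sketch using a generic FIR low-pass and its spectral complement rather than the Neville Thiele crossover from the figures; the summed response is a clean pulse only when the two bands are perfectly aligned.

    # Sum of a complementary linear-phase filter pair with a simulated
    # time-of-arrival error in the high band; ringing appears once shifted.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.signal import firwin

    fs, ntaps, fc = 48000, 2047, 366.0
    lowpass = firwin(ntaps, fc, fs=fs)          # linear-phase FIR low-pass
    delta = np.zeros(ntaps)
    delta[ntaps // 2] = 1.0
    highpass = delta - lowpass                  # complementary high-pass

    def summed(shift):
        s = np.zeros(ntaps + shift)
        s[:ntaps] += lowpass
        s[shift:shift + ntaps] += highpass      # high band arrives `shift` samples late
        return s

    for shift_ms in (0.0, 0.23, 0.5):
        s = summed(int(round(shift_ms * fs / 1000)))
        plt.plot(20 * np.log10(np.abs(s) + 1e-9), label=f"shift {shift_ms} ms")
    plt.legend()
    plt.xlabel("samples")
    plt.ylabel("dB")
    plt.show()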

When I started discussing this topic on the Acourate forum, one of the members pointed me to the white paper by B. Putzeys and E. Grimm on the ideas behind the DSP-based implementation of the professional Grimm Audio LS1 speaker (which costs quite a lot!). The authors used a minimum phase Linkwitz-Riley filter, but compensated for its phase deviations using an inverse all-pass filter. If we think about this approach, it effectively also yields a linear phase filter. In fact, when the crossover components get time shifted, the combination of the crossover plus the reverse all-pass filter also exhibits pre-ringing, although its level is a bit lower, and what's more important, its duration is shorter:

(Note that the red IR is not an ideal Dirac pulse because although the phase response of the all-pass filter I created is close to the phase response of LR4, it is not exactly the same). However, these improvements over the ringing of the Neville Thiele crossover are just due to the fact that the LR4 crossover has more relaxed slopes to start with:

Thus, instead of compensating for the phase deviations of a minimum phase crossover, which can be quite severe for high order crossovers, we can just as well start with linear phase crossovers, as they are much easier to work with. For example, I wanted to use an asymmetric shape in which the higher frequency driver has a more relaxed slope compared to the lower frequency driver. This is beneficial for the LXmini design because the directional pattern of the full range driver yields more precise spatial cues than the omnidirectional woofer. This approach also helps for the woofer and subwoofer pair because I only have one subwoofer, so I would like to preserve as much stereo bass as possible. The asymmetric shape of the crossover slopes at first yields a non-flat summed frequency response; however, this is easy to compensate for (again, with a linear phase filter), thanks to the fact that the phase shift, being always equal to zero, does not affect the summing of the amplitudes of the crossover components.

Another interesting observation: the fact that I'm performing the tuning in a real room, not in an anechoic chamber, implies that I need to use windowing of the measured frequency response. As I realized after brief experiments, frequency dependent windowing (FDW) partially suppresses the pre- and post-ringing of linear phase filters. However, as a result it also changes the shape of the frequency response by making it less steep. In my opinion, this is a good trade-off. In the next section I will show the shapes and IRs of the linear phase crossovers I have ended up with.

Crossover Preparation Details

The aforementioned Grimm Audio LS1 white paper has a suggestion on "ideal" crossover points. Based on psychoacoustics data, the authors state that the directional radiation pattern should be maintained down to 300 Hz. The original LXmini has its acoustic crossover point closer to 790 Hz; however, it uses a 2nd order LR crossover, so the output from the full range driver actually extends quite low in frequency. So the first thing I did was to measure the raw response of the full range driver. Here it is together with an FDW-processed version:

Looking at the natural roll-off of the driver, I chose 366 Hz as the crossover point. At the high frequency end, the full range driver, due to its relatively large size, starts working in breakup mode, thus losing efficiency. Plus, I'm not listening to it on-axis, and that creates a natural roll-off at high frequencies. However, that's not a problem. Since the speakers are located quite close to my ears, there is no need to try to make the frequency response ruler flat at the high frequency end, because that makes the sound too harsh. So I generated an LR2 linear phase crossover at 11 kHz and used its low frequency part to taper the response of the driver on the right side. This is how the final crossover component looks, overlaid with the raw windowed response:

Similarly, for the woofer driver I chose 46 Hz as the crossover point. The slope on the left side is LR4; however, on the right side I used a Neville Thiele 1st order crossover, as it has a sharp, "brick wall" slope. I passed it through the same frequency dependent window that I use for the in-room measurements, and this made the shape of the slope more "relaxed." Below, for comparison, is the original NT1 slope overlaid with the windowed one:

There is not much difference in the time domain though:

And this is how the designed crossover component looks on top of the raw driver response:

The subwoofer was a bit more interesting. Choosing the crossover did not require any thinking because the crossover point was already set by the woofer driver, and the type on the right side is also Neville Thiele 1st order. However, since it's an active subwoofer with a servo (Rythmik F12G), it has some settings of its own. I experimented with different damping settings and low-end extension, and found that low damping and extension down to 14 Hz create a time domain response which looks close to the IR of the crossover if I invert its polarity. This is how these IRs look overlapped (the IR of the subwoofer is polarity-inverted):

And this is the final look on the crossover components that sum up into a flat frequency response (with the high frequency range trimmed down) and a zero phase response:

Visually this crossover resembles a high order Bessel low-pass filter (used in the "RBessel" crossover type in Acourate); however, mine uses even steeper slopes on the right sides.

Driver Tuning Process

My tuning process has two major stages: the first is to bring each driver as close as possible to the behavior of the corresponding band pass filter of the crossover (which also includes fixing the phase behavior), and the second is to combine these drivers into a proper acoustic crossover.

I was doing all the measurements from a single position—the listening position. Although it is possible to linearize drivers in the near field, I did not use this approach for two reasons. First, the full range driver works as a dipole, and dipoles must be measured from some distance. Second, since I was interested in the performance of the crossover at the listening position, this was the natural position to use for driver linearization as well.

For the driver linearization I used the "Room Macros" of Acourate, setting the "Target Curve" to be the desired crossover band pass behavior. Obviously, I used the same FDW settings for the measured driver response as the ones I used to process the crossover parts during the preparation stage. I did not use "Psychoacoustic" smoothing at the driver linearization stage; instead I used the more technical "1/12 Octave" smoothing. I also limited the amplitude correction to avoid creating a boost in the frequency bands where the response of the driver naturally decays below the intended crossover suppression level. As an example, below is the correction filter for the woofer driver, overlaid with the target:

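Conceptually, this boost limiting can be sketched as follows. This is not how Acourate's Room Macros work internally; it simply caps the ratio of the target magnitude to the measured magnitude, and the +6 dB cap is a placeholder.

```python
import numpy as np

def limited_correction(measured_mag, target_mag, max_boost_db=6.0):
    """Zero-phase correction FIR whose magnitude is target/measured, boost-limited.
    Both magnitude arrays are assumed to be sampled on the rfft grid of the
    returned filter, i.e. to have length n_taps // 2 + 1."""
    corr = target_mag / np.maximum(measured_mag, 1e-9)
    corr = np.minimum(corr, 10 ** (max_boost_db / 20))   # cap the boost
    n_taps = 2 * (len(corr) - 1)
    ir = np.fft.irfft(corr, n_taps)
    return np.roll(ir, n_taps // 2)                      # center peak -> linear phase
```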
After the correction filter has been generated by "Room Macro 4" and the result has been evaluated via a test convolution, I re-measure the driver with the filter applied and then check the phase behavior. Since the correction process of Acourate tries to bring the driver to minimum phase behavior, it leaves in the phase deviations that are present in the minimum phase impulse response of the target curve. Note that when equalizing an entire full-range speaker to a mostly flat target curve, these phase deviations end up outside the hearing range. However, since a driver covers a limited frequency range, its phase deviations typically end up near the crossover frequencies, and this makes proper time alignment more problematic. For example, this is the phase response of the corrected woofer:

We can see that the phase gradually deviates from zero and "flips" over the 180° angle at 47 Hz. I treated these phase deviations using the same approach as in the Grimm Audio white paper, which is in essence the same approach as the one described by Dr. Brüggemann in his post "Time alignment of drivers in active multiway speaker systems" on the Acourate forum. That is, we need to "guess" an all-pass filter with a shape similar to the phase deviation of the driver, and then put its reversal into the correction chain (effectively, this means convolving the time-reversed all-pass filter with the existing correction filter). For example, for the woofer the corrected phase behavior looks like this:

Obviously, since it's an all-pass filter, the amplitude response remains the same. There shouldn't be more than 1 or 2 all-pass corrections needed. Only the area within the driver's working range must be corrected, and we must look at the windowed response to avoid correcting for the effects of reflections, which are very dependent on the mutual distances between the driver, the reflecting surface, and the measurement point.
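As a rough sketch of this all-pass trick, here is one way to generate a candidate 2nd-order all-pass and fold its time reversal into an existing filter. The center frequency and Q below are guesses that would have to be matched against the measured phase deviation, and the stand-in correction filter is just a placeholder.

```python
import numpy as np
from scipy.signal import lfilter, fftconvolve

fs = 48000
f0, q = 47.0, 0.7                        # placeholder all-pass parameters
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * q)
# RBJ cookbook 2nd-order all-pass biquad
b = np.array([1 - alpha, -2 * np.cos(w0), 1 + alpha])
a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])

n = 65536
allpass_ir = lfilter(b, a, np.r_[1.0, np.zeros(n - 1)])   # all-pass impulse response
reversed_ap = allpass_ir[::-1]                            # same magnitude, negated phase

# Convolving the existing correction filter with the time-reversed all-pass
# cancels the driver's excess phase; the extra latency of n - 1 samples has to
# be accounted for in the convolver.
correction = np.r_[1.0, np.zeros(1023)]                   # stand-in correction filter
corrected = fftconvolve(correction, reversed_ap)
```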

Now, with each driver brought as close as possible to the desired crossover band-pass behavior, we need to "assemble" them into a speaker by aligning their levels and times of arrival. To do that, I first measured the speaker as is and did a rough correction of the driver levels. Then I used the sine wave convolution approach, first for aligning the full range driver with the woofer, and then the woofer with the subwoofer. At low frequencies, the convolved sines may initially be considerably shifted from each other. Also, the low frequency filter may develop a bit slowly and have irregular sine amplitudes in the beginning. To ensure that the resulting time alignment of the drivers is proper, I applied the same sine wave convolution step to the crossover components and used the resulting overlapped picture as a reference. For example, this is how the sine waves of my crossover look for the 46 Hz point:

And this is how the results of the sine wave convolution looked initially for the woofer and the subwoofer:

Compared to the image before, it becomes obvious that the subwoofer (the blue curve) needs to be shifted ahead in time relative to the woofer for a proper alignment.

After applying gains and delays to the driver filters, I made another measurement and double-checked that the sine convolution on the measured IRs produces the expected result.
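For readers without Acourate, the sine convolution check itself is easy to reproduce. Below is a minimal sketch; the file names are hypothetical (raw 64-bit float IRs), the burst length is a placeholder, and the peak-based lag estimate is only a crude starting point before visually overlaying the curves.

```python
import numpy as np
from scipy.signal import fftconvolve

def sine_convolution(ir, fs, freq=46.0, cycles=20):
    """Convolve an impulse response with a sine burst at the crossover frequency."""
    t = np.arange(int(cycles * fs / freq)) / fs
    burst = np.sin(2 * np.pi * freq * t)
    return fftconvolve(ir, burst)

fs = 48000
ir_woofer = np.fromfile("woofer_ir.dbl")     # hypothetical measurement files
ir_sub = np.fromfile("sub_ir.dbl")           # (raw float64 samples)
sw = sine_convolution(ir_woofer, fs)
ss = sine_convolution(ir_sub, fs)
# Plotting sw and ss on top of each other, and on top of the same convolution
# applied to the crossover components, shows how far the drivers are shifted.
lag = (np.argmax(ss) - np.argmax(sw)) / fs   # crude estimate of the offset, seconds
print(f"offset between subwoofer and woofer: {lag * 1000:.1f} ms")
```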

Target Curve Adjustments

Life would be too easy if we could just take the summed crossover response and use it as the target for the overall speaker tuning. I tried that first and was not impressed with how it sounded. The first problem was that the vertical positions of virtual sources were too high, while I would prefer having them at eye (or ear) level. The second problem was an overall lack of "weight" in the sound. The target curve was definitely asking for some adjustments.

The first problem is a consequence of the fact that any virtual source, for example a rendering of the singer's voice, which appears to be in front of the listener, is created by a pair of stereo speakers that are physically located at the sides. In my case, the speakers are placed even wider than the conventional "stereo triangle." As S. Linkwitz explains in the paper "Hearing Spatial Detail in Stereo Recordings", if we consider the sound pressure on a very crude approximation of a human head—a sphere—we find that physical sources located in front of the sphere and at its sides create very different sound pressure distributions across the frequency range. A more precise description of this distribution is of course the HRTF. Since the two audio streams that represent the virtual central source arrive from the sides, they do not have the proper frequency profile of a center source, and as a result, the hearing system places this virtual source higher. A simple solution, used by Linkwitz, is to apply a shelving filter which compensates for this effect.
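As an illustration of such a compensation, below is a generic RBJ cookbook low-shelf biquad. The corner frequency and gain are placeholders, not the values Linkwitz recommends, and would have to be tuned by ear and by measurement.

```python
import numpy as np
from scipy.signal import sosfreqz

def low_shelf_sos(fs, f0, gain_db, s=1.0):
    """RBJ cookbook low-shelf biquad, returned as a normalized second-order section."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((a_lin + 1 / a_lin) * (1 / s - 1) + 2)
    cw = np.cos(w0)
    b0 = a_lin * ((a_lin + 1) - (a_lin - 1) * cw + 2 * np.sqrt(a_lin) * alpha)
    b1 = 2 * a_lin * ((a_lin - 1) - (a_lin + 1) * cw)
    b2 = a_lin * ((a_lin + 1) - (a_lin - 1) * cw - 2 * np.sqrt(a_lin) * alpha)
    a0 = (a_lin + 1) + (a_lin - 1) * cw + 2 * np.sqrt(a_lin) * alpha
    a1 = -2 * ((a_lin - 1) + (a_lin + 1) * cw)
    a2 = (a_lin + 1) + (a_lin - 1) * cw - 2 * np.sqrt(a_lin) * alpha
    return np.array([[b0, b1, b2, a0, a1, a2]]) / a0

sos = low_shelf_sos(fs=48000, f0=800.0, gain_db=2.0)   # placeholder corner and gain
w, h = sosfreqz(sos, worN=4096, fs=48000)              # frequency response for plotting
```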

And the second problem, the overall lack of weight (a bass-shy presentation from a flat target curve), can be explained by the interaction with the room. Running a bit ahead, below is a comparison of the speaker's quasi-anechoic response (FDW windowed) vs. the steady state room response, the latter obtained from the same measurement position by taking an RTA measurement of pink noise playing continuously:

We can see that the room "eats" the bass but amplifies the high frequencies. That's why adding more bass to the direct sound, as well as tapering the high end, seems to make sense. So after some experiments with well-recorded tracks, I chose the following target curve:

On this graph it is compared to the initial "tapered flat" crossover curve.
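As a side note, the steady-state curve used in the comparison above can also be approximated outside of REW by long-term averaging of the recorded pink noise. A minimal sketch, assuming the capture is available as a raw float64 file (the file name and averaging length are placeholders):

```python
import numpy as np
from scipy.signal import welch

fs = 48000
recording = np.fromfile("pink_noise_capture.dbl")        # hypothetical capture
freqs, psd = welch(recording, fs=fs, nperseg=65536)      # long-term averaging
mag_db = 10 * np.log10(psd + 1e-20)
# Pink noise falls at 3 dB/octave, so add the slope back to read the curve as
# if the stimulus had been spectrally flat:
mag_db_flat = mag_db + 10 * np.log10(np.maximum(freqs, 1.0))
```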

The Final Correction and Measurements

The final step in the tuning process is to apply "Room Macros" to the entire speaker using the target I have created. This time I used the "Psychoacoustic" smoothing. This step fixes any remaining discrepancies in the levels of the drivers. Below is the FDW response of the speakers after applying the correction, overlaid with the target:

And below is the phase response of the speaker—as we can see it is indeed close to the "zero phase" (this is also the windowed version which excludes phase deviations due to reflections):

I checked the group delays by using the "ICPA" function of Acourate ("Room Macro 6"), and found only one very high-Q group delay deviation, not worth correcting.
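For reference, the group delay itself can be computed directly from a measured impulse response as the negative derivative of the unwrapped phase; a minimal sketch (Acourate's ICPA adds its own processing on top of this, and the windowed IR is assumed to be available):

```python
import numpy as np

def group_delay_ms(ir, fs):
    """Group delay in milliseconds, computed as -d(phase)/d(omega) of the IR spectrum."""
    spectrum = np.fft.rfft(ir)
    freqs = np.fft.rfftfreq(len(ir), 1 / fs)
    phase = np.unwrap(np.angle(spectrum))
    gd = -np.gradient(phase, 2 * np.pi * freqs)   # seconds
    return freqs, gd * 1000.0
```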

The step responses of the speakers look good:

Note that these are responses without any windowing, so they do not look fully identical due to reflections and the asymmetry of the room. This can be clearly seen in the Energy Time Curve (ETC) graphs produced by Room EQ Wizard (REW):

Since this is a small room, strong reflections start appearing quite early, but it's hard to do anything about that because there are windows behind my listening position—I can't put any acoustic treatment there.
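The ETC shown above is essentially the envelope of the impulse response on a dB scale; a minimal sketch of the same computation, assuming the unwindowed measured IR is available:

```python
import numpy as np
from scipy.signal import hilbert

def etc_db(ir, fs):
    """Energy Time Curve: analytic-signal envelope of the IR, normalized, in dB."""
    envelope = np.abs(hilbert(ir))
    etc = 20 * np.log10(envelope / envelope.max() + 1e-12)
    t_ms = np.arange(len(ir)) / fs * 1000.0
    return t_ms, etc
```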

Also using REW, I checked the distortion and observed the known issue of the Seas FU10RB driver: a raised 2nd harmonic distortion level between 1 and 2 kHz, also noted in the "Erin's Audio Corner" review of the LXminis:

There is also a bit more distortion between 300 and 500 Hz, probably because the full range driver is being pushed harder. The distortion in the right speaker around 100 Hz is due to an interaction with a room mode: if I move the speaker to a different position, this peak disappears. And I'm not sure why each harmonic trace ends with a funny upward curve; this must be a measurement artifact.

The resonances from room modes can be seen on the spectrogram:

I decided to order some more bass traps and will see if they actually help to reduce the effects of room modes.
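A spectrogram like the one above can be reproduced from the measured IR with standard tools; a minimal sketch, with a hypothetical file name and a window length chosen as a placeholder trade-off between time and frequency resolution:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 48000
ir = np.fromfile("left_speaker_ir.dbl")                  # hypothetical measurement
f, t, sxx = spectrogram(ir, fs=fs, nperseg=8192, noverlap=7680)
sxx_db = 10 * np.log10(sxx + 1e-20)
# Slowly decaying room modes show up as horizontal ridges below ~200 Hz
# that persist long after the direct sound.
```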

Does Non-Ideal Summing Induce More Pre-ringing?

Now let's get back to one question from the beginning of the post. Recall the simulations of non-ideal summing of the acoustical linear phase crossover and the associated pre- and post-ringing. I decided to check what happens in reality. For that, I moved the measurement mic 17 cm to the right and re-did the measurements. Below are the resulting step responses. This one is for the left speaker, overlaid with the original (where the crossover components are time aligned):

Note that since the causal part of the IR is dominated by room reflections, it is not possible to judge the effect on post-ringing. As for the pre-ringing, it actually appears to be lower in the IR recorded at the microphone position shifted away from the point of perfect alignment.

And this is the right speaker:

We can see that for this one there is indeed a bit more pre-ringing. Evidently, the real acoustic behavior of speakers is much more complicated than these idealized models. For a proper evaluation of the off-axis crossover behavior, an anechoic chamber would be needed.
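As a side note, the step responses compared here are simply the cumulative sums of the measured impulse responses; a minimal sketch:

```python
import numpy as np

def step_response(ir):
    """Step response of a system given its impulse response."""
    step = np.cumsum(ir)                  # response to a unit step input
    return step / np.max(np.abs(step))    # normalize for overlaying two curves
```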

Does all this matter? Maybe not so much, after all. In any case, there is no ideal solution when we are trying to build a full-range speaker from several band-limited drivers. If we are striving for a perfect solution, we actually need to avoid using crossovers at all by using a single driver, for example, an electrostatic panel or the Manger Transducer. The Manger seems to me like a variation on a coaxial driver; however, thanks to its single, specially engineered diaphragm, it probably does not suffer from the Doppler effect. But that's a different price level.

To Be Continued

Of course, it would be interesting to discuss how this setup sounds; however, this post has already ended up being quite long. I will write about listening impressions and other things separately.