This question is of some interest for the author of these pages, because in the General Lorentz Ether proposed here \(\Lambda < 0\) would be slightly preferable. While these advantages are not decisive, it would therefore be clearly preferable if \(\Lambda < 0\) were possible.
Now, there seems to be little hope, given that there is strong evidence in favor of \(\Lambda > 0\). But is that evidence really that strong? In fact, it depends essentially on a single effect: the measurement of the redshifts of a special type of supernovae, namely SN 1a.
This type of supernova occurs in binary systems in which one of the stars is a white dwarf. The white dwarf gradually accretes mass from its binary companion until a critical mass is reached, at which point the supernova explosion starts. That means we know, in this particular case, the mass of the star which caused the explosion very well, and, as a consequence, also the strength of the explosion. This allows, by comparison with the visible strength of the explosion, to determine the distance of the supernova. An object with such a property that we can identify the real strength of the light source, and, by comparison with the visible strength, can compute the distance to it, is named a "standard candle".
So, for large distances, the SN 1a supernovae are used as such standard candles. We think we can compute the distance to the explosion with sufficient accuracy. But we can also measure the redshift. As a consequence, we have a good way to measure how the redshift depends on the distance.
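As a minimal sketch of the standard-candle logic: once the real strength (absolute magnitude \(M\)) is known, the distance follows from the distance modulus \(m - M = 5\log_{10}(d/10\,\mathrm{pc})\). The numbers below (a peak absolute magnitude of roughly \(M \approx -19.3\) for SN 1a, and an apparent magnitude of 24) are illustrative assumptions, not values from any particular measurement:

```python
import math

def luminosity_distance_pc(m_apparent: float, M_absolute: float) -> float:
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10.0 ** ((m_apparent - M_absolute + 5.0) / 5.0)

# Illustrative values: SN 1a peak at roughly M = -19.3, observed at m = 24.
d_pc = luminosity_distance_pc(24.0, -19.3)
print(f"distance = {d_pc / 1e9:.2f} Gpc")  # prints: distance = 4.57 Gpc
```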
So, what has been done was to measure, for the observed supernovae of type SN 1a, the distance and the redshift. The result was a curve which showed an acceleration of the expansion.
This has been questioned by a team of astronomers at Yonsei University (Seoul, South Korea). They found a significant correlation between SN luminosity and the stellar population age of the galaxies hosting the SN, at a 99.5 percent confidence level. Probably, what really correlates with the SN luminosity is the age of the progenitor or its companion, but these have already been destroyed when the SN happens, and, moreover, even if they had not been, they would be too far away to be identified. But the average age of the stars of the hosting galaxy appeared to be a sufficiently good proxy, at least good enough to identify a nontrivial dependence of the SN luminosity on the age.
And this dependence is important: far away galaxies tend to be younger, simply because there was less time after the big bang for them to evolve. And there are also data about this. So, we can combine them and see how the SN 1a redshift curve would look if \(\Lambda=0\), giving the following red line:
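To see what such a comparison involves, one can compute the distance modulus in a flat FLRW model with and without a \(\Lambda\) term. The sketch below assumes \(H_0 = 70\) km/s/Mpc and \(\Omega_m = 0.3\) for the \(\Lambda\)CDM case, with an Einstein-de Sitter universe (\(\Omega_m = 1\), \(\Lambda = 0\)) as the no-dark-energy reference; these parameter choices are illustrative, not the ones used in the cited analysis:

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc

def luminosity_distance_mpc(z: float, omega_m: float, omega_l: float,
                            n: int = 2000) -> float:
    """Luminosity distance in Mpc for a flat FLRW universe (trapezoidal rule)."""
    def inv_E(zp: float) -> float:
        return 1.0 / math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)
    h = z / n
    s = 0.5 * (inv_E(0.0) + inv_E(z)) + sum(inv_E(i * h) for i in range(1, n))
    comoving = (C_KM_S / H0) * h * s
    return (1.0 + z) * comoving

def distance_modulus(d_mpc: float) -> float:
    """m - M for a source at luminosity distance d_mpc."""
    return 5.0 * math.log10(d_mpc * 1e6 / 10.0)

# Hubble residual of flat LambdaCDM relative to the no-Lambda reference:
for z in (0.1, 0.5, 1.0):
    mu_lcdm = distance_modulus(luminosity_distance_mpc(z, 0.3, 0.7))
    mu_nolam = distance_modulus(luminosity_distance_mpc(z, 1.0, 0.0))
    print(z, round(mu_lcdm - mu_nolam, 3))
```

The residual is positive and grows with redshift: with \(\Lambda > 0\), supernovae at a given redshift are farther away, hence dimmer, which is exactly the signature that luminosity evolution could mimic.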
Figure 1. Luminosity evolution mimicking dark energy in supernova (SN) cosmology. The Hubble residual is the difference in SN luminosity with respect to the cosmological model without dark energy (the black dotted line). The cyan circles are the binned SN data from Betoule et al. (2014). The red line is the evolution curve based on our age dating of early-type host galaxies. The comparison of our evolution curve with SN data shows that the luminosity evolution can mimic Hubble residuals used in the discovery and inference of the dark energy (the black solid line). Credit: Yonsei University
An anisotropy of the observed acceleration would also be very problematic for the explanation using \(\Lambda > 0\), for the simple reason that the \(\Lambda\) term is completely isotropic. But such an anisotropy is what has been found. Since a purely anisotropic effect can be explained by local flows, but not by the homogeneous expansion which would be caused by \(\Lambda > 0\), only the isotropic part could be explained by \(\Lambda > 0\), so an observed anisotropy clearly decreases the evidence for \(\Lambda > 0\).
Moreover, there is also an important theoretical alternative. It has been proposed by David Wiltshire.
The point is that even if the universe was initially very homogeneous, it became more and more inhomogeneous with time. And Wiltshire has found that this inhomogeneity is now strong enough that it leads to relativistic effects: in the large voids, the density is much smaller, thus they locally expand faster. And this difference increases with time. In contrast, in the regions where the mass concentrates, at the borders of the voids, the expansion is slower.
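A rough way to see why underdense regions expand faster is the Friedmann equation for a homogeneous region of density \(\rho\) (a toy picture only, not Wiltshire's actual computation, which is more involved):

\[
H^2 = \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2},
\]

so, comparing two regions that start with the same expansion rate \(H\), the one with smaller \(\rho\) decelerates less and therefore ends up expanding faster.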
Now, we live not inside a void, but on its border, in a region with much higher than average density. Since all the light rays reaching us also pass through those growing voids, light needs longer than one would expect based on our local situation. If this is interpreted with a homogeneous ansatz (such as the FLRW metric used in \(\Lambda\)CDM), the expansion rate looks larger in comparison with our local distances.
And since this starts to become relevant only after the inhomogeneities have become large enough, and the importance of this effect increases with time, because the inhomogeneity increases too, it looks like an accelerated expansion.
According to Wiltshire's computations, this effect is large enough to explain the seeming acceleration completely, without \(\Lambda > 0\).
There have been comparisons of Wiltshire's timescape universe with the standard \(\Lambda\)CDM, see, for example, these papers.
Now, if we don't question those computations themselves, then this fact alone has fatal consequences for the \(\Lambda\)CDM model. Namely, in the \(\Lambda\)CDM model all computations are based on the homogeneous FLRW ansatz. But if Wiltshire's computations are correct, then the inhomogeneities have consequences which are too large to be simply ignored, and a homogeneous ansatz is therefore no longer acceptable as an approximation after the inhomogeneities have become too large. And this argument holds independently of the viability of Wiltshire's timescape cosmology itself -- even if it were not viable without the \(\Lambda > 0\) term, all that matters is that the value of \(\Lambda\) computed this way would differ from the one computed with the homogeneous FLRW ansatz, by a difference large enough that it cannot simply be ignored.
Unfortunately, there are a lot of mathematical subtleties: the problem is that one would have to consider averages. But it is not that clear how to define them, nor what the evolution equations for them would be.
There has been, for example, some discussion about how well the evolution of the universe can be approximated by an FLRW metric; see Green & Wald 2014 and Buchert et al. 2015.