
The net effect of clouds on Earth's radiation budget

Essentially, clouds affect climate in two ways. First, they reflect solar radiation (mostly) back into space, thereby lowering temperatures
on Earth. Second, they also reflect terrestrial infrared radiation (mostly) back to the surface, working just like a GHG. Of course
clouds are not a gas, and their net effect is said to be cooling. For those reasons they badly fail both criteria of a GHG, yet they are
usually named and treated as if they were one. The terminology here is not so important. We could just as well call them a
“greenhouse factor” or the like.
What is important, however, is the link between the two roles that clouds have. There is no fundamental difference between reflecting radiation
coming from the sun and reflecting radiation coming from Earth. Clouds do both pretty well, in the visible range as well as in the infrared. Incidentally, a large
fraction of reflected solar radiation is also in the infrared spectrum. Also, as radiation is unaffected by Earth's gravity, it makes
virtually no difference whether radiation is reflected upward or downward.

Next to the obvious reflective role, clouds will also absorb and emit radiation. I will openly admit that I never paid much attention to
this, as these are ubiquitous processes. Even within a simple piece of wood, molecules will absorb and emit radiation from and to
neighbouring molecules (it is not just happening at the surface!). As this energy is going nowhere, nobody cares, and no one would try
to build complicated “climate models” for a piece of wood.
Sure, things may be more complicated when talking about global climate. Yet the basic understanding of radiation as a ubiquitous
process makes it a somewhat unattractive subject in a closed system. Then again, Earth is not a closed system, and the concept of a
GHE does appear quite reasonable: short-wave radiation is “injected” into a semi-transparent system, which then retains long-wave
radiation.
Now this concept of a semi-transparent atmosphere would work perfectly and doubtlessly if it were based on reflection. The concept
of absorption and emission is something very different. I am pretty sure there might be a simple logical answer to the question of whether it
can work or not. But for the time being I see it as a mental jawbreaker, where I cannot get to a sober conclusion.
However, there is one thing of which I am completely certain. Even in the absence of a GHE, given that sunlight also warms the
atmosphere, not just the surface, we would have “back radiation”. So you must not consider “back radiation” as evidence for a GHE.
That would be a logical mistake.

That difference between absorption and emission on the one hand, and reflection on the other, is especially important when we consider
clouds. A simple look at the consensus GH-model will show why that is so, and how confusion is not a stranger to “official” climate
science.

It is this schizophrenic behavior that makes clouds so vexing to researchers who are trying to predict the course of climate change. ...
Furthermore, the air temperature, which is affected by clouds, in turn affects cloud formation. It's a circular relationship that makes
climate research all the more difficult.
"Clouds remain one of the largest uncertainties in the climate system's response to temperature changes," laments Bruce Wielicki, a
scientist at NASA's Langley Research Center. "We need more data to understand how real clouds behave."
https://science.nasa.gov/science-news/science-at-nasa/2002/22apr_ceres

Not only do clouds get attributed psychotic conditions, they are also accounted for accordingly in Earth's radiation budget.
https://www.weather.gov/jetstream/energy
According to this, 23% of solar radiation is reflected back to space by clouds. 23% of a total of 342 W/m2 is 79 W/m2. This is an
interesting figure, because I came to the exact same number by analyzing the optical brightness of Earth, both with and without cloud
cover. Of course this might differ somewhat in the non-visible solar infrared, but anyhow.
Clouds would then also account for 23/30 = 77% of the total albedo, which is a huge figure. You may find many sources claiming that
clouds make up over two thirds of the albedo, which seems like common wisdom, but 77% is obviously even higher than that.

Now there is something completely missing in this graph, and that is the other side. It does not give a figure for the amount of
terrestrial infrared reflected by clouds back to the surface. All we find is a massive arrow indicating 98% “back radiation”
“emitted by atmosphere”. We may assume that is meant to include the missing cloud part.
However, it makes a huge difference whether radiation is (re-)emitted or simply reflected. Also, clouds in this image truly behave
“schizophrenically”: they reflect solar radiation as clouds do, but then turn into a GHG emitting “back radiation” along with
the other GHGs. That schizophrenia, however, does not originate in the clouds.

Next to the 79 W/m2 of upward reflection, there are also 9% upward emissions, which add another 342*0.09 = 31 W/m2 of radiation loss due
to clouds. Combined, clouds would have a gross cooling effect of 110 W/m2. That is on its own, and if you will, a “gross”
statement.
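Spelled out (all percentages refer to the 342 W/m2 of average solar irradiance used in the graph):

\begin{align*}
0.23 \times 342 &\approx 79~\mathrm{W/m^2} && \text{(solar radiation reflected by clouds)}\\
0.09 \times 342 &\approx 31~\mathrm{W/m^2} && \text{(upward emission attributed to clouds)}\\
79 + 31 &= 110~\mathrm{W/m^2} && \text{(gross upward loss due to clouds)}\\
23/30 &\approx 77\% && \text{(cloud share of the total 30\% albedo)}
\end{align*}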

Also we are left wondering what magnitude that downward radiation would have. 32%, just as much as goes upward? More? Less?
The graph will not tell. The IPCC however will. “Clouds increase the global reflection of solar radiation from 15% to 30%, reducing
the amount of solar radiation absorbed by the Earth by about 44 W/m². This cooling is offset somewhat by the greenhouse effect of
clouds which reduces the outgoing longwave radiation by about 31 W/m². Thus the net cloud forcing of the radiation budget is a loss
of about 13 W/m²”1

So the albedo effect has just shrunk from 79 W/m2 to only 44, almost half of it. And the GHE would merely be 31 W/m2, or
9% (= 31/342) in the graph above. I should note that there are similar “consensus” positions on the issue naming even lower numbers,
somewhere in the range of 24-31 W/m2. That of course would be 7-9% in the named context. The upward emissions however are
completely gone!? Why would clouds only emit in one direction, that is downward? Or did the IPCC forget about them for
..convenience?

Maybe that is the reason why the cloud GHE is not named in the graph. It would look quite strange if upward reflection were 23%
plus 9% upward emissions, and downward reflection/emission only 9%, or even less. Naming a substantially higher number, however,
could jeopardize the whole GHE concept. Also there is an implicit understanding that both will be interconnected. On the other hand,
if you want to argue low downward reflections/emissions, you will not want to name massive upward reflection, which may explain
the contradicting IPCC figures.

Clearly these are two positions within the consensus model which cannot possibly go together. You either have 110 W/m2 upward and 30
downward, or something like 44 up and 31 down. Sure, the downward emissions are compatible, but there is no way to get around the
dissent between 110 and 44.
The 44 W/m2 figure is wrong for many obvious reasons. First of all, the albedo effect of clouds alone is much bigger than that, with
70 W/m2 as a minimum. Secondly, when talking about the net effect, you must not skip the upward emissions part, which would then
likely add another 30 W/m2. After all there is little sense in arguing that upward emissions were substantially lower than downward
emissions. It is the same story I already told: as emissions go in all directions, they neutralize themselves.

On the other side, by fooling around with emissions and accounting for them only selectively, you could argue anything. These figures
have doubtlessly been manufactured to argue a small downward effect, or “GHE”, of clouds, which directly competes with other GHGs in
that role. The bigger that factor is, the less remains for the rest. Comparing both versions of the consensus model side by side might
illustrate the issue (note that any downward radiation will be a competing part of the GHE):

                     NWS “Jetstream” graph            IPCC (1990)
                     Reflection    Emission           Reflection    Emission

Upward (cooling)     79 W/m2       31 W/m2            44 W/m2       0 W/m2

Downward (“GHE”)     ???           ???                ???           31 W/m2

The 110 W/m2 figure may well be true, but is a complete mess if you compare it to the downward effect. With -80 W/m2 (110-30) as a
net cooling effect of clouds, we are doubtlessly in an area of science fiction. And it gets even worse if we try to think it through. As
high clouds are meant to have a positive effect, the average (!!) cooling of 80 W/m2 would be attributed entirely to low clouds. If
furthermore we assumed an average coverage with low clouds of 35%, that would translate into 80/0.35 = 228 W/m2 of constant energy
loss with an overcast sky. This would be completely absurd.
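The arithmetic behind that last statement, with the roughly 30 W/m2 of downward emission discussed above:

\begin{align*}
110 - 30 &= 80~\mathrm{W/m^2} && \text{(implied average net cooling by clouds)}\\
80 / 0.35 &\approx 228~\mathrm{W/m^2} && \text{(net loss under an overcast sky, at 35\% low-cloud coverage)}
\end{align*}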

1 https://en.wikipedia.org/wiki/Cloud_forcing with reference to the 1990 IPCC report:
https://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_full_report.pdf (page 79)
Now one could theorize about these issues back and forth and criticize those “official” positions. Or we can look at real, empirical data and
see if we can add some clarity to the matter. The idea is this: if clouds had a cooling effect overall, then they should be associated with
lower temperatures as compared to clear skies. That is a hypothesis we could put to the test, if only we knew how.

First of all we require data, obviously on both temperature and sky (or cloud) condition. The NCEI (or NCDC) provides such
data for the USA. Of course, these data are anything but perfect. There are over 2,500 stations reporting (at least) hourly
measurements, but not all of them also provide consistent information on cloud condition. Many of these seem to be located at
airports, and the reporting covers clouds only up to an altitude of 12,000 ft, possibly because higher altitudes would be irrelevant
for flight control around the airport. That is an important aspect, as it is argued that high clouds provide a stronger GHE
than low clouds. As we cannot cover high-altitude clouds, whatever the outcome, it should overestimate the (net) cooling effect
of clouds, if there is any.

The cloudiness is essentially described by only five parameters, which are CLR, FEW, SCT, BKN and OVC (which translate into
clear (sky), few clouds, scattered clouds, broken sky and overcast). There are also combinations of these conditions, where you may
have for instance few clouds at a lower level and an overcast situation above that. However you would (or should) never see the
opposite, like overcast at a low level and a few clouds at a higher level. If you wrap your mind around the issues involved, this
makes a lot of sense. With visual judgment you cannot look beyond a cloudy sky. And in the context of aviation, you want to provide
information as good as possible on the visibility around the airport and the runways. For our purpose what matters is the
maximum level of cloudiness, ignoring the altitude. Also there are a very few exceptions to those rules, which look like human error
and indicate that all these data might be taken “manually”, rather than automatically.

I started with a basic approach, which generally seems opportune, but was so specifically in this case, as I was not working with a
database. Rather I was using a C program to extract the data directly from .txt files containing all the records (over 100 million) from
the named stations over the years 2015 to 2017. In the evolution of the code you would naturally go from simple to more
sophisticated.
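To give an idea of that first, simple pass, here is a minimal sketch in C. The record layout (one report per line: station, date, hour, cloud condition, temperature in °F) and the file name are assumptions for illustration only; the actual program certainly differs in detail.

/* Minimal sketch of the first pass: average temperature per cloud condition.
   The record format (one report per line: "STATION YYYY-MM-DD HH COND TEMPF")
   and the file name "records.txt" are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

static const char *COND[5] = { "CLR", "FEW", "SCT", "BKN", "OVC" };

static int cond_index(const char *s)
{
    for (int i = 0; i < 5; i++)
        if (strcmp(s, COND[i]) == 0)
            return i;
    return -1;                          /* unknown or missing cloud report */
}

int main(void)
{
    double sum[5] = { 0 };              /* temperature sums per condition  */
    long   cnt[5] = { 0 };              /* record counts per condition     */
    char   station[16], date[16], cond[8];
    int    hour;
    double temp_f;

    FILE *fp = fopen("records.txt", "r");
    if (!fp) { perror("records.txt"); return 1; }

    while (fscanf(fp, "%15s %15s %d %7s %lf",
                  station, date, &hour, cond, &temp_f) == 5) {
        int c = cond_index(cond);
        if (c >= 0) { sum[c] += temp_f; cnt[c]++; }
    }
    fclose(fp);

    for (int c = 0; c < 5; c++)
        if (cnt[c] > 0)
            printf("%s %12ld records, avg %6.2f F (%6.2f C)\n",
                   COND[c], cnt[c], sum[c] / cnt[c],
                   (sum[c] / cnt[c] - 32.0) * 5.0 / 9.0);
    return 0;
}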

So first I plotted these five levels of cloudiness vs. the corresponding average measured temperatures.

Condition     Records        Temp. F    Temp. C
CLR           50,791,658     56.74      13.75
FEW            3,755,719     63.83      17.68
SCT            8,440,561     64.36      17.98
BKN           10,016,310     60.52      15.84
OVC           29,900,161     48.27       9.04

As one might expect, given the assumed predominant cooling effect of clouds, overcast scenarios are by far the coldest. In second
place, however, come clear skies, and that is quite surprising, just as much as intermediate cloudiness being associated with the highest
temperatures.
Further analysis suggested that CLR and OVC scenarios tend to be more common during winter time, which might be a very simple
explanation for this strange pattern (though it ultimately is not, as we will see). In order to filter out this bias I added another register for each
month of the year. Now the program builds a 2-dimensional array, with the 5 cloud conditions and 12 months, and
calculates the average for each cell. Only then does the average of the 12 monthly values determine the average of the particular
cloud condition.
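In formula form (my notation, not the author's code), the seasonally de-biased mean for a cloud condition c is the unweighted mean of its twelve monthly means:

\[
\bar{T}_c \;=\; \frac{1}{12}\sum_{m=1}^{12}\;\frac{1}{N_{c,m}}\sum_{i=1}^{N_{c,m}} T_{c,m,i}\,,
\]

where N_{c,m} is the number of records with condition c in month m.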

Condition Temp. C
CLR 13.22
FEW 15.74
SCT 15.56
BKN 14.94
OVC 10.91

Filtering by season flattens the curve substantially. Now the difference between FEW and OVC shrinks to less than 5°C, as compared
to 9°C before. Except for OVC all other scenarios drop in temperature, which is also due to the large share of OVC scenarios (29%) in
the unweighted sample. As the result shows, a seasonal adjustment is absolutely mandatory.

Next, there will likely be another bias due to location. For instance there are a lot of stations located in hot and dry desert areas, so
they would typically report both clear skies and high temperatures. In order to filter out this bias, I used a 3-dimensional array with
the 5 cloud conditions and 12 months for each of the reporting stations. Technically I would add up all the temperatures and run a second
array of the same dimensions that counts the number of samples. Then you just divide the sum of temperatures by the counter,
and only then do you average the averages.
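A sketch of how that bookkeeping might look in C (the array bound and the averaging order are my assumptions based on the description above; empty cells are simply skipped):

/* Station/month/condition sums and counts, filled while scanning the
   records (as in the first-pass sketch). MAX_STATIONS is an assumed bound. */
#define MAX_STATIONS 3000
#define MONTHS       12
#define CONDS        5

double t_sum[MAX_STATIONS][MONTHS][CONDS];   /* sum of temperatures */
long   t_cnt[MAX_STATIONS][MONTHS][CONDS];   /* number of samples   */

/* Average of averages: first per station/month/condition cell,
   then over the months of each station, then over the stations. */
double condition_mean(int cond, int n_stations)
{
    double station_sum = 0.0;
    int    station_n   = 0;

    for (int s = 0; s < n_stations; s++) {
        double month_sum = 0.0;
        int    month_n   = 0;
        for (int m = 0; m < MONTHS; m++) {
            if (t_cnt[s][m][cond] > 0) {
                month_sum += t_sum[s][m][cond] / t_cnt[s][m][cond];
                month_n++;
            }
        }
        if (month_n > 0) {
            station_sum += month_sum / month_n;
            station_n++;
        }
    }
    return station_n > 0 ? station_sum / station_n : 0.0;
}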

Anyhow, the outcome is highly interesting. We still have a “curve”, but it is almost symmetric. CLR and OVC scenarios are, as
before, the coldest, but with almost identical temperatures. In fact they are so close that the difference might even be within the
statistical margin of error.

Condition Temp. C
CLR 12.67
FEW 14.15
SCT 14.29
BKN 13.91
OVC 12.30

Only filtering out two factors, season and location, has dramatically changed the outcome. Now there is no overall tendency towards lower
temperatures with clouds, something that should exist, however, if they had a predominant cooling effect. Why does intermediate
cloudiness (FEW, SCT, BKN) go along with the highest temperatures?
One might assume that these are at least partially due to fair-weather clouds, where thermal updraft moves moist air to higher
altitudes. Because of the adiabatic lapse rate, these air masses will cool down (while still being relatively warmer than the surrounding air)
and cause the vapour to condense. So, partial cloudiness may well be correlated with warm weather, without causing it.

Yet seasonal influences make much of a difference. While CLR situations are moderately warm during summer and OVCs by far the
coldest, it is the opposite during winter. This makes a lot of sense when you consider that during the cold and low-sunlight season a
lot of “heat” is “imported” from warmer regions and the ocean respectively. Under such circumstances clouds will provide a relatively
stronger GHE in relation to the amount of solar radiation blocked.

Another interesting detail is that the intersection of the CLR and OVC lines happens at much lower temperatures in spring (or late
winter) than in autumn. This makes perfect sense, since temperatures lag behind solar intensity by about one month. In
spring we have relatively strong solar radiation, but low surface temperatures. Clouds will accordingly reflect relatively more sunlight
than terrestrial infrared. Accordingly OVC will underperform CLR. In autumn it is exactly the opposite. It is a phenomenon
that should exist in theory, but it is nice to see it confirmed empirically. Also, this may be a modest indication that this analysis is not
totally unreasonable.

Logically this brought up the question of whether the picture would change substantially if we moved southward. Most US stations are
located between 30 and 49° latitude. Most solar radiation however is absorbed in tropical regions, at less than 30° latitude.
Regrettably I could not move US weather stations southward, or find comparable data in other, more tropical countries. But what I
could do was to select a more southbound sample, based on stations located below 35° latitude. This would not yet be able to
represent a tropical climate, but it should be possible to determine a trend, if there is one.

Condition Temp. C
CLR 19.42
FEW 21.52
SCT 21.73
BKN 21.36
OVC 19.40

Is there any trend? Not at all! Things stay very much the same. In warmer regions, closer to the equator, solar radiation will be more
intense, and accordingly clouds will block more solar radiation. But as temperatures are generally higher, the local “GHE” of clouds
will be stronger too. They will block, or rather reflect, more infrared back to the surface in their presence.
Sure, as stated before, I would have expected the opposite. Relative to solar irradiance, temperatures will be somewhat lower in the
tropics than in peripheral regions, as illustrated in the picture below. So why does it not materialize? I do not know! But there are
only two conceivable answers.
Either we simply have a bad sample, or there is a systemic factor which is simply heretical. What if clouds are actually warming as a
net effect? Then, with the increase of radiative exchange in the tropics, that net effect would become stronger rather than weaker and
could well offset the effect of heat transfers.
Yet I have attempted two more iterations. A sample of stations <30° latitude..

Condition Temp. C
CLR 22.46
FEW 24.38
SCT 24.59
BKN 24.15
OVC 22.34

..and another one only including stations within 25° latitude south and north. This is really a tiny sample, including just 26 stations
(there was a total of 67 stations, but only those 26 were also reporting cloud conditions). These would be located in Key West and
Hawaii, next to some military bases like Guam, Diego Garcia and so on..

Condition Temp. C
CLR 24.79
FEW 25.51
SCT 25.93
BKN 25.97
OVC 24.96

In this last sample, which is to be taken with some caution, OVC is finally warmer than CLR. Indeed there is a trend we can see as we
restrict the sample ever more to tropical and warmer regions: the curve tilts to the left rather than to the right, which comes as a surprise.
Even though the quantity, and thus the quality, of the data base is diminishing, there is one definite conclusion. There is not the
slightest indication that clouds have more of a cooling effect close to the equator. Rather, the data suggest the opposite.

There are two biases that could be sorted out by simple arithmetic. But there are a couple more, which I cannot quantify at
all, or at least not the way I did before. Yet that does not mean they do not exist, or that they do not matter. In fact they may be all-
important.

1. There is a diurnal cloud cycle. “Peak clouds” happen in the early morning, the minimum happens around sunset. Despite other
patterns, which do exist, overall cloudiness increases in the absence of the sun and diminishes when the sun shines. And while
minimum cloudiness does not quite coincide with daily peak temperatures, peak cloudiness is a perfect match to minimum
temperatures.
For this reason clouds will be associated with lower temperatures, even though it is “not their fault”, or should I say not a result of
their radiative impact. It is another factor that should be filtered out, if only I knew how and could quantify it accordingly.

Just as I wrote those lines it came to my mind that, after all, it could be possible. So I had to stop writing and go back to
programming. I tried some things and ultimately had another Eureka moment. Let us go through it step by step.

The first thing was to determine diurnal cloud patterns, not as an aggregate index, but in each category. Again the outcome was way
different from what I had expected.

Now this is quite specific, isn't it? Both CLRs and OVCs diminish during daytime, while intermediate cloudiness increases at their
expense. I have no clue why this is so, except for the mentioned thermal effects, but the data do not lie. One could potentially argue
that meteorologists might have problems differentiating cloud conditions during night time and thus prefer either
CLR or OVC as an indication. Who knows! But there is plenty of reason to believe that is not so. Most of all, however, this is the
perfect explanation of why intermediate cloudiness is associated with the highest temperatures. And if we could sort out this daily
pattern, then the curve might ultimately flatten.

The next step was to sort out this bias. The method was to determine the global average (for each sky condition), then
determine an average for every single hour of the day for each condition, average those 24 results, and calculate the difference of
the two. I am not sure if it is the perfect approach, as more structural factors come into play, but it certainly goes in the right direction.
Here is the adjustment..

Condition adjustment in C
CLR 0.17
FEW -0.62
SCT -1.01
BKN -0.37
OVC 0.02
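Written as a formula (my reading of the method, with the sign chosen so that it matches the table): with \bar{T}_{c,h} the mean temperature of all records of condition c in hour h, and \bar{T}_c the plain record-weighted mean,

\[
A_c \;=\; \frac{1}{24}\sum_{h=0}^{23}\bar{T}_{c,h} \;-\; \bar{T}_c\,,
\]

so that the adjusted value \bar{T}_c + A_c is simply the mean with all 24 hours weighted equally.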

The adjustment strikes exactly where it should. The intermediate scenarios get reduced, the extreme ones increased. Also it flattens
the curve and makes things more explicable. Yet I am not sure if I got the theoretical part completely right; the outcome still looks a bit
odd. Anyhow, it seems reasonable, and the result is more plausible than it was before.

Condition     Temp. C    Correction in C    Result
CLR           12.67       0.17              12.84
FEW           14.15      -0.62              13.52
SCT           14.29      -1.01              13.28
BKN           13.91      -0.37              13.54
OVC           12.30       0.02              12.32

With that adjustment the curve is almost gone. After allowing for season, location and diurnal patterns, all cloud conditions are within
a range of about one degree Celsius, which so far suggests a neutral radiative net effect of clouds.

2. Precipitation and bad weather


This is to be taken with a grain of salt. First, I am not completely sure if there are substantial amounts of rain falling from clear skies.
Rather this reflects the 12,000 ft issue. These are clear skies up to 12,000 ft, but above that, anything goes. So the amount of rain gives
some indication of the (average) occurrence of clouds above that ceiling. The other problem is that on many occasions there will be
more than just one hourly report, each giving the amount of precipitation per hour. So if you simply add up these figures, you will get
more precipitation than actually happened.
As we are more interested in the basic correlation between precipitation and cloud condition, this may still serve as a simple indicator
to affirm what common sense tells us anyhow: rain falls from clouds.

The important part is this. Cold rain will lower temperatures on the surface, first because it falls from higher altitudes where it is much
colder. Secondly, once it has fallen, it will provide a chilling effect to the surface due to evaporation. And finally there will be a certain
association with thunderstorms and winds, which also will drop temperatures sharply.

All these factors will produce lower temperatures, but they are not (directly) related to the radiative balance of clouds themselves. So they
make OVCs (and to a lesser degree BKN and SCT) colder than they would be otherwise. I have not yet figured out how to reasonably
weight these factors.
But even if we give them just a very small significance, this will easily be enough to give our basic chart a slight tilt to the left. Then,
already, clear skies will likely be colder than cloudy skies.

3. Pressure systems and the adiabatic lapse rate. Again I see no way to quantify this factor, but it does exist. At sea level a 1%
change in air pressure means almost 1°C change in temperature. This is due to the property of gases to heat up with pressure (or the
opposite), the same property that is at work in the adiabatic lapse rate. It is also the basic reason why it gets colder the higher one gets up a mountain.
So when air pressure changes due to the influx of weather systems, this will have an effect on the surface temperature. Potentially this
effect could be quite large, like 3 degrees centigrade, ceteris paribus. But since the process of changing air pressure takes time and
radiative exchange is taking place all along, and since there is a certain amount of inertia in soil temperatures (mind this is a small
factor on land but a huge factor over water), it is a very complicated subject. Yes, it does exist, but quantifying it would be a research
subject of its own.
However, the point here is that clouds will be associated with low pressure and clear skies with high pressure. As air pressure itself
affects temperatures, it will make clouds look colder than they actually are with regard to their radiative net effect. Allowing for this
factor will tilt the curve further to the left, making adjusted clear skies colder, and cloudy skies warmer.
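A rough plausibility check of the 1%-to-1°C claim, assuming dry-adiabatic scaling and a surface temperature near 288 K:

\[
\frac{T_2}{T_1}=\left(\frac{p_2}{p_1}\right)^{R/c_p},\qquad R/c_p\approx 0.286,
\qquad
\Delta T \approx 0.286\times 0.01\times 288~\mathrm{K} \approx 0.8~\mathrm{K},
\]

so a 1% change in pressure indeed corresponds to slightly under 1°C, ceteris paribus.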

4. The 12,000 ft ceiling issue. As stated before, there is a lot of imperfection in the weather data analyzed. If there were something more
precise I would embrace it, but reality means working with what you have. Apart from those issues already named, and in the context
of this analysis, the absence of data beyond the named level means there will actually be more clouds than reported in any scenario
except for OVC (where it does not matter). If you add clouds above 12,000 ft to an OVC scenario it is still an OVC scenario. Any
other scenario is more affected. Clear skies could be FEW, SCT, BKN or OVC scenarios with the inclusion of the sky condition above.
FEW could be SCT, BKN or OVC, SCT could turn into BKN or OVC, and BKN could turn into OVC. Now that is logical, but not
even that important.
"On the other hand, high wispy clouds drifting by are less refreshing. Such clouds cast meagre shadows and, because they are
themselves cold, they trap heat radiated from the planet below. The air temperature near the ground might actually increase."
https://science.nasa.gov/science-news/science-at-nasa/2002/22apr_ceres

or..

"Clouds emit energy in proportion to their temperature. Low, warm clouds emit more thermal energy than high, cold clouds. .. This
means that a world without low clouds loses about the same amount of energy to space as a world with low clouds.

High clouds are much colder than low clouds and the surface. They radiate less energy to space than low clouds do. ..Because high
clouds absorb energy so efficiently, they have the potential to raise global temperatures. In a world with high clouds, much of the
energy that would otherwise escape to space is captured in the atmosphere. High clouds make the world a warmer place. If more high
clouds were to form, more heat energy radiating from the surface and lower atmosphere toward space would be trapped in the
atmosphere, and Earth’s average surface temperature would climb."
https://earthobservatory.nasa.gov/IOTD/view.php?id=44250

I am still trying to wrap my head around this narrative. Low clouds warm, high clouds cold, got it. But then it gets quite odd. High
clouds will emit little radiation into space because they are relatively cold. But won't they emit only little radiation back to the surface
for the same reason? And then again, it would be a different story if high clouds reflected infrared back to the surface, because their
temperature would not matter in that case. The narrative however insists on (re-)emission of infrared, rather than reflection.
On the other side, warm low clouds are said to emit almost as much energy into space as the surface itself would in the absence of
clouds. This seems to be somewhat ignorant of another important claim of the consensus model, according to which only 12% of
surface radiation goes straight into space while most infrared gets absorbed by GHGs within the atmosphere. So low clouds are hindered
in radiating to space because of GHGs, while high clouds are not, or far less so.
For instance, let us compare a low and warm cloud (288 K, close to the surface) to a high and cold cloud (223 K, 10,000 m). The cold cloud
would emit only (223/288)^4 = 36% of the radiation of the lower one. But then, up to 88% of the low cloud's radiation gets
absorbed by the atmosphere's GHGs, while the high cloud would emit almost 100% into space. It is hard to imagine how low clouds would
end up emitting more energy into space than high clouds, and the whole model seems quite ill-conceived and not really thought through.
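In absolute terms, using the Stefan-Boltzmann law with σ = 5.67×10⁻⁸ W/m²K⁴ and an emissivity of 1 for simplicity:

\[
\sigma\,(288~\mathrm{K})^4 \approx 390~\mathrm{W/m^2},\qquad
\sigma\,(223~\mathrm{K})^4 \approx 140~\mathrm{W/m^2},\qquad
140/390 \approx 0.36\,.
\]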

Yet we should take it seriously, as such respectable institutions as NASA or the IPCC are backing it. If it were true that high clouds
had a relative warming effect, while low clouds had a quite strong net cooling effect, then, by only looking at low clouds, we would
get a relevant negative bias. The sub-sample of low clouds would then have to be associated with lower temperatures than it would be if we
included the warming high clouds (which we cannot do, as we do not have the data).
What the data show, however, is already a warming (or at best neutral) net effect of low clouds. So if there were any truth in the claim
of warming high clouds, clouds in total would have an even larger net heating effect.

Getting the numbers right


Regrettably there is no simple formula to tell us how an energy loss or gain by radiation will affect surface temperatures. One could
very well determine the amount of energy it takes to heat a liter of water by 10 K, for instance. “Surface” in this context however is not
a quantifiable term; rather it is highly circumstantial. Convection with air or atmosphere (if there is any) will depend on its density.
Wind may be another factor. On the other hand, conduction will be very dependent on the structure of the soil. Large rocks will heat up
slowly, and pass on energy deep beneath the surface. With liquid surfaces advection comes into play, and temperatures will be very
stable.
On the moon, with its dusty surface and no atmosphere, the surface becomes a very thin layer with little mass, which is highly
sensitive to solar irradiance. A surface of ocean water would be the other extreme, where temperatures will be very stable.

So while we do not have a specific formula, we do have empirical data. We know that on land, given a clear sky and reasonable solar
irradiation, there will be a diurnal temperature variation of about 20 K. If solar irradiance is weak, like during winter, it will be less.
Also, vicinity to water will reduce this variation, which is specifically true for ocean-bound places, but lakes and rivers may have some
influence too. Also it is important to note that we are generally measuring air temperatures (at least 1.5 m above the soil), not soil temperatures. The
soil itself would show much more extreme temperatures. But most of all the diurnal variation is influenced by clouds.
A very simple analysis of diurnal temperatures relative to cloud condition gives us an idea of how rigidly a closed cloud layer can
shut down radiative exchange. There are more sophisticated ways to analyze this question, like looking at night-time cooling in relation
to the cloud situation, and doing so for individual places and so on. The results however are very much consistent. An average OVC
situation will effectively reduce emissivity (as measured by changes in temperature) by an impressive 85%, as compared to a clear
sky.
(Note: these data are completely unadjusted; the absolute temperatures should not be attributed to particular cloud conditions.)

Now things get very interesting if we follow the consensus model. If only 12% of terrestrial infrared goes directly into space, as shown
in the first graph, that will be 342*0.12 = 41 W/m2. This is a very low figure as compared to what I would guess, but let us take it as
given. The 41 Watt figure already includes the average GHE of clouds, which means that “window” will be substantially larger with a
clear sky, but also much smaller with an overcast sky.
Average cloudiness reduces emissions by roughly 35%, meaning the remaining 65% would correspond to the 41 Watt figure. Then a
clear sky would mean 63 W/m2, and an overcast sky only about 10 W/m2. Of course these figures are derived from average
surface temperatures and would grow accordingly higher, or lower, along with local temperatures.
I should also note that the 35% figure would then correspond to an average GHE of clouds of 22 W/m2, which is somewhat lower than
the 24-31 W/m2 you would find in the literature. Also, I derived the 35% figure from US weather stations, which feature a lot of arid
places with low humidity, low rain, and of course few clouds.
For that reason let us consider an alternative scenario, where clouds account for a GHE of 30 W/m2. Then a clear sky would mean
41+30 = 71 W/m2, and an all-overcast sky would still be around 10 W/m2, maybe 11 W/m2.
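The window arithmetic spelled out (the 85% reduction under OVC is taken from the diurnal-variation result above, the 0.65 from the 35% average reduction):

\begin{align*}
41 / 0.65 &\approx 63~\mathrm{W/m^2} && \text{(clear-sky window)}\\
63 - 41 &= 22~\mathrm{W/m^2} && \text{(implied average cloud GHE)}\\
0.15 \times 63 &\approx 9\text{-}10~\mathrm{W/m^2} && \text{(overcast window)}\\
41 + 30 &= 71~\mathrm{W/m^2},\quad 0.15 \times 71 \approx 11~\mathrm{W/m^2} && \text{(alternative scenario)}
\end{align*}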

With a clear sky, temperatures will drop sharply during the night. If it is moderately warm we have a diurnal temperature variation of
up to 20 K. As soon as the sun is gone, temperatures may drop by 1.5-2.0 K per hour, which would correspond to the named 63-
71 W/m2. So we finally have an educated guess on the relation between radiative energy loss (or input) and surface temperatures.

This insight opens a highly relevant perspective on the claim that clouds, specifically low ones, would have a net cooling effect of
13 W/m2 (as CERES data suggest)2, 18 W/m2, or a staggering 80 W/m2 (!?), as the consensus GH-model implies. Please note that
only the last figure can be reasonably argued when taking the consensus model seriously.

It is safe to conclude that clouds can only exert this cooling effect in their presence. If the sky is clear, clouds will not have an impact,
obviously. Now if clouds covered about 35% of the sky, and if these 35% were responsible for the average 18 W/m2 net cooling, then
an all-overcast sky would go along with a net cooling effect of 18/0.35 = 51 W/m2. Even if we take a larger percentage, like 40%, we
would still end up with 18/0.4 = 45 W/m2.
Given the relation we derived above, both 45 and 51 W/m2 of energy loss should easily be enough to drop temperatures by 1 K per hour.
That is over day and night. Even during summer, a solid cloud cover lasting for just two days should easily drop temperatures
far below freezing. It will not get much better if we assume the 13 W/m2 figure, and things definitely drop off the chart once
we consider the 80 W/m2.
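The numbers again, scaled with the relation derived above (a rough proportionality, not an exact calculation):

\[
18/0.35 \approx 51~\mathrm{W/m^2},\qquad 18/0.40 = 45~\mathrm{W/m^2},\qquad
\Delta T \approx \frac{45\ldots51}{63\ldots71}\times(1.5\ldots2.0)~\mathrm{K/h} \approx 1~\mathrm{K}\ \text{per hour or more.}
\]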

What we see in reality is completely different. Once we adjust for the named factors, clouds are not associated with lower
temperatures at all. Rather, temperatures are equally high, or higher, with clouds than without. This fact alone rules out the possibility
of a net cooling effect.
So if clouds have a gross cooling effect of 110 W/m2, their “GHE” must be at least 110 W/m2, or higher. At this point it does not even
matter whether we call this (re-)emission or reflection of terrestrial infrared. The result remains the same. Also it is worth noting that
the 110 W/m2 will be largely correct, while the 44 W/m2 figure named by the IPCC (likely anticipating its own contradiction) is
completely off limits. Clouds would then account for only about 40% of Earth's albedo, and everyone can see that is not true. That is
next to zero upward emissions.

2 You may just as well use the IPCC figure of 44-31 = 13 W/m2. Or 79-31 = 48 W/m2??? Or any net cooling effect you like. The basic
issues discussed here will prevail.
All this has significant implications. Solar irradiance is about 342 W/m2, of which 236 W/m2 “survive” the albedo effect. Most of the
albedo is of course due to clouds again. 236 W/m2 would be enough to heat the surface to about 254 K. Since we observe about 288 K,
we are missing ((288/254)^4-1)*236 = 154 W/m2, which corresponds to the GHE as we know it. But as the “GHE” of clouds alone
amounts to >110 W/m2, the unaccounted-for GHE, which may be due to “other” GHGs, diminishes substantially.
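The underlying Stefan-Boltzmann arithmetic (emissivity 1 assumed, as in the consensus model):

\[
T=\left(\frac{236}{5.67\times 10^{-8}}\right)^{1/4}\approx 254~\mathrm{K},\qquad
\sigma\,(288~\mathrm{K})^4 - 236 \approx 390 - 236 = 154~\mathrm{W/m^2}.
\]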
Yet there is one more major mistake in the consensus model: assuming a surface emissivity of 1, as if Earth were a perfect black
body. The physical properties of water give it a hemispheric emissivity of only 0.94, and other surface types on Earth have even lower
emissivities (specifically arid surfaces, which cover about 10% of the planet). With an effective emissivity of about 0.92, radiative
emission is overestimated by roughly 30 W/m2.
Both factors account for at least 110+30 = 140 W/m2 versus a total GHE of 154 W/m2, which leaves very limited scope for GHGs, if
any at all. With a maximum of only about 15 W/m2 remaining, all GHGs together could provide a GHE of up to 2 K, although this
figure will likely be even smaller.
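And the closing arithmetic of that argument (effective emissivity of 0.92 as assumed above):

\[
(1-0.92)\times\sigma\,(288~\mathrm{K})^4 \approx 0.08\times 390 \approx 31~\mathrm{W/m^2},\qquad
154-(110+30) = 14~\mathrm{W/m^2}\ \text{remaining for all other GHGs.}
\]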
The GHE as such is not a reality, but merely a mistake. A mistake of which some of the apologists may well be aware, it seems, or they
would not seek exits from the obvious contradictions within their flawed models.
