How certain can we be about commercial LiDARs' mean wind speed measurements (offshore)?

Commercial vertical profiling wind LiDARs (hereafter: “LiDARs”) are measurement instruments that are tailor-made for wind energy purposes. One of these purposes is to measure, at a given location and for a long period of time (1 to 3 years), the mean wind speed and direction every 10 minutes at different elevations above the surface (typically up to 200 mASL). The resulting time series are used as an input to wind resource and energy yield assessments.

If you are not familiar with wind resource and energy yield assessments, see the following report from NREL, which summarizes the exercise well: (Clifton, Smith and Fields, 2016) “Wind Plant Preconstruction Energy Estimates: Current Practice and Opportunities”.

LiDAR manufacturers include, for instance, Leosphere and ZX Lidars. In this post I will only use examples from these two companies, but there are of course other manufacturers. LiDAR stands for “Light Detection And Ranging”; see here for a description of how they work and what they are used for. There are other types of wind LiDARs, such as nacelle LiDARs and scanning LiDARs; these are not part of today’s discussion (they are not widely used for wind resource assessments yet). A great introduction to wind LiDARs and how they help us understand the atmospheric boundary layer can be found in “Remote Sensing for Wind Energy” (DTU, 2015).

LiDARs are well suited for wind measurement campaigns, in particular offshore, where they replace expensive (and short) meteorological masts equipped with cup anemometers and wind vanes. A typical LiDAR data product is a dataset of 10-minute wind speed and wind direction time series.

An important question for the wind analyst is then: how much can I trust these 10-minute time series, and in particular the mean wind speed? If a two-year-long LiDAR time series at 100 mASL has a mean value of 10.0 m/s, could the true mean wind speed actually be only 9 m/s? Or 11 m/s?

In order to assess how wrong LiDARs could be, one can think of two methods:

  1. Understand how the LiDAR works, assess each of the uncertainties associated with its functioning, and derive a total uncertainty;
  2. Compare the LiDAR measurements to some trusted measurement device.

It turns out that both methods have been tried. See the following examples: for the first, “Flywheel calibration of a continuous-wave coherent Doppler wind lidar” (Pedersen and Courtney, 2019); for the second, “The long term stability of lidar calibrations” (Courtney and Nygaard, 2014).

Both methods have advantages and drawbacks. As of today, the second method is the most popular. The community of Wind & Site practitioners has settled on a formatted and well-established way to compare LiDAR measurements with those of a cup anemometer mounted on a met mast. You can find examples of such validations at the links below:

  • Example of ZX LiDAR validation reports: LINK;
  • Example of Leosphere Windcube v2 validation report: LINK.

The reports are issued either by LiDAR manufacturers or by trusted, experienced third parties such as DNV-GL and ECN/TNO. They compare concurrent (10-min averaged) time series between cups on met masts and LiDARs. The main objective of these reports is to demonstrate whether the LiDAR measurements fall within, or outside, certain accuracy and precision thresholds; see below: to “pass” a test, the following results should be achieved:

  • the slope of the regression line (forced through zero) should be between 0.98 and 1.02;
  • the regression coefficient between the two time series should be at least 0.98;
  • the median offset in wind direction should be smaller than 5°, and the relative mean difference less than 3%.
These are the criteria a LiDAR must fulfill to be accepted for a commercial project. Source: https://www.windopzee.net/wp-content/uploads/2020/03/tno2019r10398.pdf
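
To make these criteria concrete, here is a minimal Python sketch of how the quantities could be computed from two concurrent 10-minute time series. Two caveats: I read “regression coefficient” as the squared Pearson correlation (conventions vary between reports), and the wind direction criterion is left out; the function and the example values are purely illustrative.

```python
import numpy as np

def validation_metrics(cup, lidar):
    """Acceptance metrics from concurrent 10-min mean wind speeds (m/s)."""
    # Slope of a least-squares regression line forced through the origin
    slope = np.sum(cup * lidar) / np.sum(cup ** 2)
    # Squared Pearson correlation (one reading of "regression coefficient")
    r2 = np.corrcoef(cup, lidar)[0, 1] ** 2
    # Mean difference of the LiDAR relative to the cup
    mean_rel_diff = np.mean((lidar - cup) / cup)
    return slope, r2, mean_rel_diff

# Illustrative pass/fail check against the thresholds quoted above
cup = np.array([8.1, 9.9, 10.3, 11.0])
lidar = np.array([8.0, 10.0, 10.2, 11.1])
slope, r2, mrd = validation_metrics(cup, lidar)
passed = (0.98 <= slope <= 1.02) and (r2 >= 0.98) and (abs(mrd) < 0.03)
```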

I have started to compile LiDAR validation reports that have been made publicly available, see the following online repository: https://drive.google.com/drive/folders/17nFnVlid3qoecSnpJg1FKZhvLABAYZeB

A peculiar situation

Interestingly enough, the vast majority of the LiDARs “pass” the test. I have personally never seen, to this day, a LiDAR not fulfilling these criteria. The manufacturers report similar findings; see for instance: “Repeatability of ZephIR300 performance demonstrated across more than 240 IEC compliant verifications” (ZX LiDARs, 2016).

If one does not have access to project-specific documentation (usually confidential), there are a number of public sources with validation reports, and all show successful tests (see for instance the repository linked above).

When I find the time, I should make a big overview of these test results; they all show the same thing: mean deviations are always reported to be smaller than 2%.

Wind measurement uncertainty

Now that most of us are convinced that LiDARs measure as well as cup anemometers, the question becomes how to assign an uncertainty to the measured mean wind speed value we discussed earlier (10.0 m/s at 100 mASL). Typically, this uncertainty is expressed as the standard error of the measurements, in percent.

The international standard IEC 61400-12 provides a sound basis for assessing the uncertainty of cup and LiDAR measurements. The Offshore Wind Accelerator, an R&D organisation, has produced a nice report entitled “Lidar Uncertainty Standard Review: Methodology Review and Recommendations” (OWA, 2018-06). The report deals primarily with floating LiDARs (thereby adding an additional uncertainty term due to the buoy motion), but essentially the key message is that the IEC methodology, if applied scrupulously, leads to very large uncertainties, see below: up to 8%. On the other hand, the proposed, revised methodology leads to a 2% uncertainty.

Reproduced from the figure on page 4 of the above-mentioned OWA report.

This number of 2% uncertainty essentially comes from the nominal uncertainty assigned to the reference cup anemometer. As shown in the right-hand plot of the figure above, the uncertainty of the cup in a wind tunnel (the combination of the “calibration” and “classification” uncertainties) is small (less than 1%). The reason for assigning a larger uncertainty to cup measurements on met masts is, well, the met mast: due to mast flow disturbance, cup anemometer wind speed measurements are often biased; this bias is very hard to quantify, and horror stories from the not-so-distant past have taught Wind & Site practitioners not to rely too much on cup anemometers. That is where the 2% comes from.

Because we use method no. 2 here, cups are our “truth”/reference, and as Mr. Courtney put it in 2015: “The reference wind speed (cup) uncertainty DOMINATES lidar uncertainty”. In practice, this means that LiDAR measurements are routinely assigned uncertainties larger than or equal to 2%; see for instance a recent wind resource assessment for the Hollandse Kust (noord) wind farm zone in the Netherlands, “Webinar Wind Resource Assessment for Hollandse Kust (noord)” (2019-05-16): slide 17 gives an overview of the uncertainty components, where the measurement uncertainty for the LiDAR is set to 3.3%.

Example of uncertainty table from a recent, state-of-the-art wind resource assessment (see link in the text above).
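
As an aside on how such a table is built: individual terms are normally combined in quadrature (root-sum-square), on the assumption that they are independent. A minimal sketch, with purely illustrative component names and values that are not taken from the report:

```python
import math

# Hypothetical component values (not from the HKN report), combined
# in quadrature as is usual for independent uncertainty terms
components = {
    "reference cup anemometer": 0.020,  # 2.0%
    "lidar verification":       0.015,  # 1.5%
    "mounting / flow effects":  0.020,  # 2.0%
}
total = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"total measurement uncertainty: {total:.1%}")  # ~3.2%
```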

For a mean wind speed of 10.0 m/s, this translates to a 90-percent probability of exceedance threshold of approx. 9.6 m/s. You can make the calculation yourself using this applet: https://homepage.divms.uiowa.edu/~mbognar/applets/normal.html

The 90-percent exceedance threshold can be computed from the mean and the standard deviation (called standard error or uncertainty in this article).
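
For those who prefer code to the applet, here is the same calculation as a short Python sketch, assuming normally distributed errors and using the 3.3% standard error from the HKN example above:

```python
from scipy.stats import norm

mean_ws = 10.0   # m/s, measured long-term mean wind speed
std_err = 0.033  # 3.3% standard error, as in the HKN example above
sigma = std_err * mean_ws

# The wind speed exceeded with 90% probability (P90) is the
# 10th percentile of the assumed normal distribution
p90 = norm.ppf(0.10, loc=mean_ws, scale=sigma)
print(f"P90 = {p90:.2f} m/s")  # approx. 9.58 m/s, i.e. ~9.6 m/s
```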

How many of them?

Now, let us pause for a moment. We have seen that:

  • Since cups have been (and to some extent, still are) somewhat inaccurate instruments when mounted on met masts, they are assigned a nominal standard error of 2%;
  • LiDARs, since they are compared with cups, are assigned a standard error which is at least as large as 2%;
  • Yet, the vast majority of the LiDAR vs cup comparisons show mean relative differences smaller than 2%.

How could this be? Well, intuitively, if both instruments had a 2% standard error, we should be seeing LiDARs “failing” tests; the question is: how many of them?
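
Before running any simulation, a back-of-the-envelope estimate already suggests an answer, assuming independent, normally distributed errors and the ±2% acceptance band implied by the slope criterion:

```python
import math
from scipy.stats import norm

# If cup and LiDAR errors are independent, each with a 2% standard error,
# their relative difference has a standard error of 2% * sqrt(2) ≈ 2.83%
sigma_diff = 0.02 * math.sqrt(2)

# Probability that the absolute relative difference exceeds the 2% band
p_fail = 2 * norm.sf(0.02 / sigma_diff)
print(f"expected failure rate: {p_fail:.0%}")  # ~48%, i.e. ~120 out of 250
```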

I have designed a small test in MATLAB to answer this question. It works like this: let us imagine that we want to test a number of LiDARs, say 250 of them:

  1. We will compare them with cups, and we will assign 10 LiDARs to each cup (i.e. a total of 25 different cups);
  2. Then, we’ll assign a standard error to both the cups and the LiDARs. It can be, for instance, 2% for each;
  3. The next step will be to run 250 tests. We will focus on the 10 m/s wind speed: for each test, we will randomly pick a value of measured wind speed for the LiDAR, and one for the cup. For instance, it can be that for the first test, the cup measures 10.2 m/s, and the LiDAR 9.9 m/s.
  4. Then, we’ll compute the relative difference between the two. In the example above, for the first test, it is (9.9-10.2)/10.2 = -2.9%, that is: the LiDAR fails the test (argh).

At the end, we can count how many LiDARs failed, out of 250. In order to cover many different realisations of this experiment, we’ll repeat the whole thing (i.e. the 250 tests) 500 times. Some results are presented below, after a code sketch of the experiment: they show box plots representing the distribution of the number of LiDARs failing the tests (out of 250). Since we have 500 different realisations, there is some stochasticity, yet the message is rather clear: with a 2% standard error for each instrument, the median number of LiDARs failing is 119. As expected, this number decreases, and becomes 0 when each instrument is assigned a standard error of 0.5%.
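
Here is a minimal Python re-implementation of that experiment (my original was in MATLAB; treating the cup error as a systematic, per-cup offset shared by its 10 LiDARs is one interpretation of the cup assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_WS = 10.0        # m/s, the wind speed we focus on
N_CUPS = 25           # number of reference cups
LIDARS_PER_CUP = 10   # LiDARs validated against each cup
N_REAL = 500          # realisations of the whole experiment
SE_CUP = 0.02         # cup standard error (2%)
SE_LIDAR = 0.02       # LiDAR standard error (2%)
FAIL = 0.02           # fail if |relative difference| exceeds 2%

n_failed = np.empty(N_REAL, dtype=int)
for r in range(N_REAL):
    # One reading per cup, shared by the 10 LiDARs it validates
    cup = np.repeat(rng.normal(TRUE_WS, SE_CUP * TRUE_WS, N_CUPS),
                    LIDARS_PER_CUP)
    lidar = rng.normal(TRUE_WS, SE_LIDAR * TRUE_WS, cup.size)
    n_failed[r] = np.sum(np.abs((lidar - cup) / cup) > FAIL)

print(f"median LiDARs failing out of {N_CUPS * LIDARS_PER_CUP}: "
      f"{np.median(n_failed):.0f}")  # close to the 119 reported above
```

Swapping N_CUPS = 10 and LIDARS_PER_CUP = 25 gives the second configuration discussed below.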

Results of the test with 500 times 250 LiDARs tested (10 LiDARs per cup, 25 cups in total).

We can repeat the test with, this time, 25 LiDARs per cup and 10 cups in total, see below: same conclusion; if the LiDAR and cup uncertainties were both 2%, we should be seeing LiDARs failing validation tests.

Results of the test with 500 times 250 LiDARs tested (25 LiDARs per cup, 10 cups in total).

Where is that leading us?

The discussion above may, first of all, help raise awareness about this peculiar situation (i.e. that cups and LiDARs are assigned a nominal uncertainty of 2%, while all validation tests show relative mean differences smaller than 2%).

Secondly, it may help reduce LiDAR uncertainty, which we know is a key contributor to the overall annual energy production uncertainty. This, in turn, may help reconsider the other uncertainty terms, in particular those related to the “wind-to-power” calculations (power curve, wakes, etc.), as well as the ones related to the operation of the turbines.

To be clear, I do not consider that the above discussion should make practitioners reduce overall uncertainties simply by reducing wind speed uncertainty. I simply think that uncertainty budgets need to be refined, so as to better orient future research agendas or validation works. Yet, in some particular situations (power curve tests, for instance), a reduction of wind uncertainty may be useful.

Comments, questions and suggestions are welcome,

Rémi.

Update 2021-04-22:

ICYMI, please see the recent developments of the “vector vs scalar” averaging question for Leosphere LiDARs: https://www.vaisala.com/sites/default/files/documents/WEA-GLO-LEO-Brief-HybridWind-DID65257en-A.pdf.

The topic has been discussed for some time already.

I shall come back to this topic in a future post, looking at it from an offshore wind perspective. It is about time ^^.

2 Comments

  1. Hi Remi

    I understand the OWA report shows the comparison results. However, it does not give the breakdown or a detailed explanation, does it? Do we have any spreadsheet (or detailed uncertainty calculation) which demonstrates the IEC and OWA methods for uncertainty calculation?
