How would you adjust this leveling data?
Posted by geeoddmike on October 10, 2019 at 7:28 pm
In a recent posting, the characterization of least squares adjustment as "data doctoring" prompted me to generate an example of its utility and rigor.
How would you derive heights for the unknown points in the level network shown below? What are the heights of the unknowns? How accurate are the new heights with respect to the known heights? Optional: Prove your answer is the best possible.
BTW, the sample data was taken from the text "Linear Algebra, Geodesy and GPS" by Gilbert Strang and Kai Borre.
geeoddmike replied 4 years, 6 months ago · 11 Members · 33 Replies
Using Star*Net, I weighted the data at 0.006 m/km (way higher than the normal 0.0015 m/km that I usually use).
MicroSurvey STAR*NET-PRO Version 9.2.4.226
Run Date: Thu Oct 10 2019 15:45:01

Summary of Files Used and Option Settings
=========================================

Project Folder and Data Files
Project Name TEST LEVELING
Project Folder C:\PROJECTS
Data File List 1. test leveling.dat

Project Option Settings
STAR*NET Run Mode : Adjust with Error Propagation
Type of Adjustment : Lev
Project Units : Meters
Input/Output Coordinate Order : North-East
Create Coordinate File : Yes

Instrument Standard Error Settings
Project Default Instrument
Differential Levels : 0.006000 Meters / Km

Summary of Unadjusted Input Observations
========================================

Number of Entered Stations (Meters) = 3
Fixed Stations  Elev     Description
A               10.0210
B               10.3210
C               11.0020

Number of Differential Level Observations (Meters) = 5
From  To  Elev Diff  StdErr  Length
A     E   0.7320     0.0059  970
A     F   1.9780     0.0060  1002
B     E   0.4200     0.0062  1070
C     F   0.9880     0.0063  1110
E     F   1.2580     0.0057  890

Adjustment Statistical Summary
==============================
Number of Stations = 5
Number of Observations = 5
Number of Unknowns = 2
Number of Redundant Obs = 3

Observation  Count  Sum Squares  Error
                    of StdRes    Factor
Level Data   5      4.636        1.243
Total        5      4.636        1.243

The Chi-Square Test at 5.00% Level Passed
Lower/Upper Bounds (0.268/1.765)

Adjusted Elevations and Error Propagation (Meters)
==================================================
Station  Elev     StdDev    95%       Description
A        10.0210  0.000000  0.000000
B        10.3210  0.000000  0.000000
C        11.0020  0.000000  0.000000
E        10.7445  0.003671  0.007195
F        11.9976  0.003711  0.007274

Adjusted Observations and Residuals
===================================
Adjusted Differential Level Observations (Meters)
From  To  Elev Diff  Residual  StdErr  StdRes  File:Line
A     E   0.7235     -0.0085   0.0059  1.4     1:5
C     F   0.9956      0.0076   0.0063  1.2     1:6
E     F   1.2531     -0.0049   0.0057  0.9     1:8
B     E   0.4235      0.0035   0.0062  0.6     1:7
A     F   1.9766     -0.0014   0.0060  0.2     1:4

Elapsed Time = 00:00:00
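For readers without an adjustment package, the numbers above can be reproduced with a short weighted least-squares computation. The sketch below (Python/NumPy) assumes each line's standard error is 0.006 m times the square root of its length in km, and that the reported StdDev values are the a priori propagated standard deviations:

```python
import numpy as np

# Fixed benchmarks (m) and the five observed height differences (m)
fixed = {"A": 10.0210, "B": 10.3210, "C": 11.0020}
# (from, to, observed dH, length in km); unknowns are E and F
obs = [("A", "E", 0.7320, 0.970),
       ("A", "F", 1.9780, 1.002),
       ("B", "E", 0.4200, 1.070),
       ("C", "F", 0.9880, 1.110),
       ("E", "F", 1.2580, 0.890)]

unknowns = ["E", "F"]
A = np.zeros((len(obs), len(unknowns)))   # design matrix
l = np.zeros(len(obs))                    # reduced observations
w = np.zeros(len(obs))                    # weights = 1/sigma^2

for i, (frm, to, dh, km) in enumerate(obs):
    l[i] = dh
    if frm in fixed:
        l[i] += fixed[frm]                # fold the known height into the observation
    else:
        A[i, unknowns.index(frm)] = -1.0
    if to in unknowns:
        A[i, unknowns.index(to)] = 1.0
    sigma = 0.006 * np.sqrt(km)           # assumed: 0.006 m per sqrt(km)
    w[i] = 1.0 / sigma**2

W = np.diag(w)
N = A.T @ W @ A                           # normal matrix
x = np.linalg.solve(N, A.T @ W @ l)       # adjusted heights of E and F
std = np.sqrt(np.diag(np.linalg.inv(N)))  # a priori standard deviations

for name, h, s in zip(unknowns, x, std):
    print(f"{name}: {h:.4f} m  +/- {s:.4f} m")
```

Under those assumptions this returns E = 10.7445 m and F = 11.9976 m with standard deviations of about 0.0037 m, matching the Star*Net listing.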
I got hung up on D in question #1.
- Posted by: @john-hamilton
Using Star*Net, I weighted the data at 0.006 mm/km
Project Default Instrument
Differential Levels : 0.006000 Meters / Km
Being pedantic here, I am assuming you mean 0.006 meters per sqrt(km). But neither you nor Star*Net mentioned the square root, so I'm not 100% sure how this is treated.
I guess Point F should be labeled Point D or vice versa, as I can't find D either.
- Posted by: @bill93
I am assuming you mean 0.006 meters per sqrt(km). But neither you nor Star*Net mentioned the square root, so I’m not 100% sure how this is treated.
Standard error of height per unit distance. Star*Net also allows this error to be expressed as height per turn.
I pulled out the calculator and find the Star*Net standard errors listed above are 0.006 * sqrt(km).
The usual assumption in the textbooks is that the turn-by-turn errors are independent so the variances add, not the std deviations.
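That textbook assumption is why the error grows with the square root of distance: if a run of length L takes n independent turns, the variances add, so the run's sigma is sigma_turn * sqrt(n). A tiny illustration (the per-turn sigma and turns-per-km figures below are hypothetical, chosen only so that 1 km works out to 0.006 m):

```python
import math

# Illustration only: suppose each instrument setup (turn) contributes an
# independent error of sigma_turn, and a 1 km run takes turns_per_km turns.
sigma_turn = 0.002   # m per turn (hypothetical)
turns_per_km = 9     # hypothetical

def run_sigma(km):
    n = turns_per_km * km
    # variances add for independent errors, so sigma grows with sqrt(n)
    return sigma_turn * math.sqrt(n)

print(run_sigma(1.0))   # sigma for a 1 km run
print(run_sigma(4.0))   # quadrupling the distance only doubles sigma
```

So a 4 km run has twice, not four times, the standard error of a 1 km run, which is exactly the sqrt(km) scaling being debated.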
To Bill93 and MightyMoe,
Sorry for the mislabeling. The point labeled “F” should have been labeled “D.”
I had hoped there would be some non-LSQ user responses showing how they approach this situation.
- Posted by: @bill93
I pulled out the calculator and find the Star*Net standard errors listed above are 0.006 * sqrt(km).
Since all the distances listed are right around 1 km, I’m not sure how you’re distinguishing between km and sqrt(km) in your calculations.
Here’s a partial screenshot from the Star*Net v6 instrument options dialog:
These units are consistent with, for example, the accuracy specification of my DNA03 level:
- Posted by: @jim-frame
I’m not sure how you’re distinguishing between km and sqrt(km) in your calculations.
Star*Net output:
From To Elev Diff StdErr Length
A E 0.7320 0.0059 970
A F 1.9780 0.0060 1002
B E 0.4200 0.0062 1070
C F 0.9880 0.0063 1110
E F 1.2580 0.0057 890

I calculate:
0.006 * km   0.006 * sqrt(km)
0.0058       0.0059
0.0060       0.0060
0.0064       0.0062
0.0067       0.0063
0.0053       0.0057

I guess Star*Net simplified their label since 1 km and sqrt(1 km) give the same answer. But it is obvious from those numbers that they are computing with sqrt(km).
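Bill93's two columns can be reproduced directly; because all five lines are close to 1 km, the two scalings differ only in the fourth decimal place, but the sqrt(km) column is the one that matches the listing. A quick check:

```python
import math

lengths_m = [970, 1002, 1070, 1110, 890]              # from the Star*Net listing
starnet_stderr = [0.0059, 0.0060, 0.0062, 0.0063, 0.0057]

for L, se in zip(lengths_m, starnet_stderr):
    km = L / 1000.0
    linear = 0.006 * km               # if the spec meant m per km
    root = 0.006 * math.sqrt(km)      # if the spec meant m per sqrt(km)
    print(f"{L:5d} m  linear={linear:.4f}  sqrt={root:.4f}  Star*Net={se:.4f}")
```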
And if you just take the arithmetic mean of the two determinations of each point from the two lines (short/long, run forward/backward from B/C), the difference would still fall within the allowable 0.006*sqrt(km).
Attached below is a photo of the page in the text from which this posting was taken.
I note a reply suggests that an acceptable answer is to take the mean of the two direct determinations from each known point, but how would the direct measurement between D and E be used? Comparing that approach with the rich mathematical detail available from the adjustment packages should encourage adoption of the rigorous method. Not having any adjustment package, I use Matlab.
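For comparison, here is what the simple-mean approach works out to (using the data's original labels, where F is the point mislabeled D); note that it has no way to use the direct E-F observation at all:

```python
# Naive approach: mean the two direct determinations of each unknown,
# ignoring the E-F observation entirely (F is the point mislabeled D).
A, B, C = 10.0210, 10.3210, 11.0020

E_from_A = A + 0.7320    # 10.7530
E_from_B = B + 0.4200    # 10.7410
F_from_A = A + 1.9780    # 11.9990
F_from_C = C + 0.9880    # 11.9900

E_mean = (E_from_A + E_from_B) / 2   # 10.7470
F_mean = (F_from_A + F_from_C) / 2   # 11.9945

# Compare against the least-squares results (E = 10.7445, F = 11.9976):
print(E_mean, F_mean)
```

The unweighted means land a few millimetres from the least-squares values, and they come with no statistics to say how good they are.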
In an on-line set of lecture notes on least squares adjustments Prof Sneeuw describes the omission of data approach as follows:
- Posted by: @bill93
But it is obvious from those numbers that they are computing with sqrt(km).
To test this I ran a dummy data set that features inter-station spacings larger than 1 km, and the results show that the standard error of the inter-station lines is, indeed, being computed using sqrt(km). The Star*Net manual could certainly be more clear on this matter, but technically it’s correct in asking for the instrument standard error in 1 distance unit, since sqrt(1) = 1. As I noted previously, that’s also the way the Leica is spec’d.
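The dummy-data test works because lines well away from 1 km separate the two models cleanly. For a hypothetical 4 km line, for instance, the two interpretations give standard errors a factor of two apart:

```python
import math

# Hypothetical 4 km line: the two weighting models diverge clearly.
km = 4.0
print(0.006 * km)             # 0.0240 m if sigma scales with distance
print(0.006 * math.sqrt(km))  # 0.0120 m if sigma scales with sqrt(distance)
```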
So using the longer line would result in a difference/error in elevation of 3mm. I would then want to know why you would want to go through LSA when 3mm falls within the 1st order limit of 8mm?
Shouldn’t the distance from A to F be 1020 instead of 1002?
- Posted by: @jt50
I would then want to know why you would want to go through LSA when 3mm falls within the 1st order limit of 8mm?
If you don’t need to realistically characterize the errors of your work, there’s no reason to run a statistically valid adjustment. However, most geodetic leveling is used as the foundation for future work, and not having well-characterized errors of the original marks leaves future users without a basis to evaluate the accuracy of their own results.
Yes, typo
How does it affect the output?
What we do these days is run levels with our electronic levels, dump the data into the program, and push the button to get a least squares adjustment; if we see anything odd, we re-run it.
But if I were in the field with an automatic level and the given parameters, I would do the run, then a checkbook reduction of this data on a sheet in the book; in about 5 minutes I come up with a value of 11.997 for D and 10.744 for E.
I ran A-B, adjusted E to 10.747 from 10.753
A-C, adjusted D to 11.995 from 11.999
B-C, checked in 0.001 m; eyeball adjustment for D to 11.998 and no adjustment for E (10.741)
mean 6mm at E and 3mm at D. A simple mean at that point.
Clearly there are some issues: there is a bit over 1 cm floating in there, either in the fixed benchmarks or in one of the runs. In the real world you would need some established error budgets to figure out whether that's a problem; for this exercise I would say the runs are good to go.
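The "bit over 1 cm" can be seen directly in the loop closures of the raw data. A quick sketch, using the published observations and fixed heights:

```python
# Loop misclosures from the raw observations (m); a closed circuit of
# height differences should sum to zero.
dAE, dAF, dBE, dCF, dEF = 0.7320, 1.9780, 0.4200, 0.9880, 1.2580
A, B, C = 10.0210, 10.3210, 11.0020

# A -> E -> F versus the direct A -> F line
loop1 = dAE + dEF - dAF          # 0.0120 m
# A -> E versus B -> E, closed through the fixed A-B difference
loop2 = (dAE - dBE) - (B - A)    # 0.0120 m
# A -> F versus C -> F, closed through the fixed A-C difference
loop3 = (dAF - dCF) - (C - A)    # 0.0090 m

print(loop1, loop2, loop3)
```

Two of the three circuits misclose by 12 mm, which is consistent with the roughly 1 cm of disagreement noted above.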
- Posted by: @jim-frame
However, most geodetic leveling is used as the foundation for future work, and not having well-characterized errors of the original marks leaves future users without a basis to evaluate the accuracy of their own results.
How would future users have access to these LSA notes? All you are given for a certain benchmark is the elevation of the point. I very much doubt that ordinary users would be able to dig up these LSA notes, and I further doubt that future users would even care to look at them.
- Posted by: @jt50
All you are given for a certain benchmark is the elevation of the point.
JS0768 ***********************************************************************
JS0768  DESIGNATION -  P 1200
JS0768  PID         -  JS0768
JS0768  STATE/COUNTY-  CA/PLACER
JS0768  COUNTRY     -  US
JS0768  USGS QUAD   -  ROCKLIN (1981)
JS0768
JS0768                          *CURRENT SURVEY CONTROL
JS0768  ______________________________________________________________________
JS0768* NAD 83(2011) POSITION- 38 49 43.37074(N) 121 11 25.70485(W)   ADJUSTED
JS0768* NAD 83(2011) ELLIP HT-    98.206 (meters)         (06/27/12)  ADJUSTED
JS0768* NAD 83(2011) EPOCH   -  2010.00
JS0768* NAVD 88 ORTHO HEIGHT -   127.675 (meters)      418.88 (feet)  ADJUSTED
JS0768  ______________________________________________________________________
JS0768  GEOID HEIGHT    -      -29.478 (meters)        GEOID18
JS0768  NAD 83(2011) X  - -2,576,657.191 (meters)      COMP
JS0768  NAD 83(2011) Y  - -4,256,163.618 (meters)      COMP
JS0768  NAD 83(2011) Z  -  3,977,583.218 (meters)      COMP
JS0768  LAPLACE CORR    -       10.70 (seconds)        DEFLEC18
JS0768  DYNAMIC HEIGHT  -      127.588 (meters)   418.60 (feet)  COMP
JS0768  MODELED GRAVITY -  979,947.3 (mgal)            NAVD 88
JS0768
JS0768  VERT ORDER - FIRST CLASS I
JS0768
JS0768  Network accuracy estimates per FGDC Geospatial Positioning Accuracy
JS0768  Standards:
JS0768          FGDC (95% conf, cm)   Standard deviation (cm)    CorrNE
JS0768             Horiz Ellip        SD_N   SD_E   SD_h        (unitless)
JS0768  -------------------------------------------------------------------
JS0768  NETWORK   0.40  0.67          0.19   0.12   0.34       -0.06240227
JS0768  -------------------------------------------------------------------