RTK and Post-Process Results
Posted by jt50 on October 5, 2020 at 9:29 am

Is there a way to reproduce the RTK coordinates in the office through post-processing? I can't seem to get the same results using a 1 base / 1 rover setup with kinematic processing. Is it possible to use the corrections from the base and recompute based on the timestamp of a fixed-status point?
paul-in-pa replied 3 years, 6 months ago · 9 Members · 20 Replies
I don’t know.
I do know that how different brands do things varies, and within brands things vary as technology expands.
I have a Javad, and it allows RTK in the field. But as of about a month or three ago, it also seems to store the parallel base/rover files in the receiver AND allow post-processing in the field. The coordinate generated via post-processing in the field is usually a few hundredths away from the RTK-generated coordinate. This is a completely different algorithm from RTK, and theoretically a stronger coordinate. I've not proven this, but from what I read it WILL eventually (if not now) become the stronger solution.
So, I think I understand your question: is it possible to regenerate the RTK work? Back 20 (!) years ago when TDS was an entity, there was some mechanism in TDS to modify the base coordinate and all sideshots from that base (essentially moving the underlying geodetic lat/lon), but I cannot remember how it was done. From what I read, there is some mechanism in Trimble office software that does something like what you are asking, PROVIDED you have the raw data intact.
So, did you lose a file, raw file, or something?
If you tell us your equipment brand, and file type, it’d help.
There are people on this forum that are 10x smarter than me, and some are smarter than that. (Once you get that smart, you have to find somebody to help them find their slippers, and put them on, but that’s another story!)
(Grin)
I've ALWAYS been fascinated by, and uncomfortable with, the fact that I cannot re-do and review the raw data for RTK in the SAME way I can review the raw field book data generated the "old way" we used to work.
So, this essentially MADE me dependent on a set of computations, spit out of my data collector, that I cannot work out with my TI-30 and a cup of coffee.
It makes for a bizarre form of trust in the Mfr. that I'm not exactly happy with, more or less. But there's no going back. Your post hit a nerve. I hope you forgive the semi-hijack.
I think you will get what you need, so long as your trust in the Mfr. is working. At least within a couple of hunnerts!
Thank you,
Nate
- Posted by: @jt50
Is there a way to reproduce the RTK coordinates in the office through post-processing?
One requirement for that is that your rover setup must store the raw data from all the satellites, not just the computed positions. Can yours do this, and do you have it selected?
I've done post-processing for base-rover vectors using receivers that stored the sat data, with good results. The software lets me add as many CORS as I want to the solution.
If using a proprietary network, you would need to download their equivalent station data.
Yes, it's called "RTK and logging", a survey style available in Trimble. Don't know about any other vendors.
As Bill mentioned, the critical thing for post-processing is to store vectors (and associated quality information) rather than positions. Only then will you be able to post-process and update positions.
Unless there is something strange going on in the controller, or in your post-processing software, after importing just the raw data the vectors (and subsequent points) should reflect what was seen in the field. I would double-check the coordinates of the base position first.
"…people will come to love their oppression, to adore the technologies that undo their capacities to think." -Neil Postman

Thanks for the reply. We were doing a topo in relatively low wooded areas. We completed a section of the site last week. Yesterday, we processed last week's data and found bald spots in the coverage. When the survey team went back to take RTK shots of the bald area, they were able to overlap with last week's data. After processing, we found that points in the overlap area from the 2 sessions were off by 2+ meters. Only some portions were off in elevation. We reviewed the RTK CSV file and all points were Fixed, with RMS values less than 0.10 m.
I could not find where the errors were coming from. This was why I wanted a procedure to reprocess all the RTK data in the office to see if I can debug it.
I used 2 post-processing programs, Magnet and RTKLIB, to compare the kinematic results. They returned different values for the same timestamped data. I had a colleague recheck the base coordinates, HI, and pole height to see if I had made an error. The base information is correct, but the 2 programs returned different results, both between themselves and against the RTK points. Now I am really confused.
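One way to make the disagreement above concrete is to difference the two solutions epoch by epoch. This is a minimal sketch, assuming both solutions have been exported to a simple "timestamp,easting,northing,elevation" CSV; that column layout is an assumption for illustration, not either vendor's native format.

```python
# Hypothetical sketch: compare two post-processed solutions (e.g. exports
# from two different PPK programs) epoch by epoch and flag disagreements.
import csv

def load_solution(path):
    """Read a timestamp-keyed solution CSV into a dict: t -> (E, N, H)."""
    sol = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            sol[row[0]] = (float(row[1]), float(row[2]), float(row[3]))
    return sol

def compare(sol_a, sol_b, tol=0.05):
    """Return epochs where the two solutions disagree by more than tol (m)."""
    outliers = []
    for t in sorted(set(sol_a) & set(sol_b)):
        de = sol_a[t][0] - sol_b[t][0]
        dn = sol_a[t][1] - sol_b[t][1]
        dh = sol_a[t][2] - sol_b[t][2]
        if max(abs(de), abs(dn), abs(dh)) > tol:
            outliers.append((t, de, dn, dh))
    return outliers
```

Sorting the flagged epochs by time can show whether the busts cluster in one session (pointing at a bad initialization) or are scattered (pointing at processing settings).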
- Posted by: @rover83
the critical thing for post-processing is to store vectors (and associated quality information)
I think it is the pseudoranges from each satellite that are needed. A vector usually refers to the distance and direction between two stations, which is only a little more information than positions, having lost the time dependence in the satellite data.
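The distinction above can be illustrated numerically: a vector between two stations is just the difference of their ECEF positions, carrying none of the per-epoch satellite observables a post-processor needs. This sketch uses the standard WGS84 geodetic-to-ECEF conversion; the coordinates in any real use would come from your own stations.

```python
# Compute a base->rover ECEF baseline vector from two geodetic positions.
# This is all a "vector" contains: three components and their statistics,
# with no time-tagged pseudorange/carrier data left to reprocess.
import math

A = 6378137.0            # WGS84 semi-major axis (m)
F = 1.0 / 298.257223563  # WGS84 flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic lat/lon (degrees) and ellipsoid height (m) to ECEF."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def baseline(base, rover):
    """ECEF vector from base to rover, plus slope distance (m)."""
    b, r = geodetic_to_ecef(*base), geodetic_to_ecef(*rover)
    dx, dy, dz = r[0] - b[0], r[1] - b[1], r[2] - b[2]
    return (dx, dy, dz), math.sqrt(dx * dx + dy * dy + dz * dz)
```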
Good point, I should have specified that I was referring to "post-processed RTK" rather than PPK.
Actual PPK processing, in my experience, results in slightly different coordinates than RTK, which I have always chalked up to my (rudimentary) understanding that RTK processing engines operate differently than PPK algorithms.
LLH, actually. I am comparing the H values and roughly converting the LL differences.
I have had issues with groups of points not fitting due to an error in field work, or some other unexplained error, in my case usually caused by someone using the equipment with little experience or training. Many years ago there were also sometimes seemingly random point shifts, with groups of points off by a metre.
If there is no other way to correct it, I would adjust the points in CAD (Civil 3D): create point groups and shift them horizontally and/or vertically so that the data matches.
If in doubt, I would check in the field to ensure everything fits as it should.
@jt1950
Honest answer: I think you may have some bad inits out there. Were these redundant shots (overlapping data) under canopy?
Fixed does not always mean “correct”. It can mean your unit is over there saying: “fooled me too” 🙂
My thoughts are: develop a sequence.
Set up the base. Shoot a sideshot to an easy local item every day out. This can help you catch a bad HI at the base, or a bad rod height.
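That daily check-shot routine can be sketched as a simple tolerance comparison. Point names, coordinates, and the 0.05 m tolerance below are made-up values for illustration.

```python
# Minimal daily check-shot test: compare today's shot on a known local
# item against its accepted coordinate. A consistent bust, especially in
# elevation, is the classic signature of a wrong base HI or rod height.
def check_shot(accepted, observed, tol=0.05):
    """Return (ok, dN, dE, dH) for a check shot vs accepted (N, E, H)."""
    dn = observed[0] - accepted[0]
    de = observed[1] - accepted[1]
    dh = observed[2] - accepted[2]
    ok = max(abs(dn), abs(de), abs(dh)) <= tol
    return ok, dn, de, dh

# Example: a 0.15 m elevation bust trips the check, horizontal is fine.
ok, dn, de, dh = check_shot((5000.00, 2000.00, 100.00),
                            (5000.01, 2000.00, 100.15))
```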
My first guess is you may have some bad shots out there.
One way to demonstrate this is to shoot the c/l of a nice straight (both horizontal and vertical) section of road that runs under thick trees. Import this into CAD and look at the weird potholes and mountains that show up in the drawing but don't exist on the ground.
Thank you,
N
But how many of us can afford the extra time to recheck even just 10% of all points surveyed? It is similar to TS shots where you just shoot as you go, but there you have a closure check for the traverse loop. In RTK it's different. I read that FIX means that only 67% of the shot's value is correct. If it means what I think it means, we are reading field data that is MAYBE 1/3 erroneous for every shot taken?
Which brings to mind how many topos we have done using RTK that had erroneous data because of this 67% threshold.
Yes, I have the raw logs from both the base and rover. These were what I used to post-process (PPK), but using 2 post-processing programs I got different results for each timestamped point. Magnet and RTKLIB are producing 2 different elevations for the same timestamped point, which is causing us to lose confidence in the RTK shot itself.
From the discussions here, am I to understand that RTK coordinates are recorded as-is on the data recorder? Are the ambiguities not recorded in the raw logs for us to reprocess in the office?
Many surveyors use (abuse) RTK. It is expected that UNDER CANOPY it will contain some bad data.
Your work sequence should be built around this limitation.
Such as: go slower under canopy. Take more shots. Spend more time on them. Take shots in the open, and use a hand level, compass, and tape to extend into canopy. (I did all this in my Topcon Legacy-E days.)
With Javad, there are algorithms that do 2 things: fully verify each shot (slow but sure), or go faster with a higher density of shots and weed out the bad in the office. I do a mix of both.
Your field guy should check your office guy too.
But, no matter your brand, your goal is “truth”.
Rtk and woods requires special consideration. There is no easy way around this.
"Truth", woods, and 2 seconds per shot is "fantasy surveying". It is more like 30 seconds to 4 minutes per shot in the partially obstructed areas. If you run the ravines and creeks (as I do) with full verification (near 100% confidence shots), it really can help weed out the bad. Time is money. If you care about your topo, it takes time.
Thank you,
Nate
For the vast majority of modern data collectors, positional error in northings/eastings/elevations is reported at the 1-sigma, or 68%, confidence level, which is most likely where that 67% figure originated.
However, that does not mean each observation has a 1/3 chance of being "wrong", and it has nothing to do with the characteristics of an RTK fix. A fix only means that the receiver has solved for the integer ambiguities and will apply that solution to the current RTK session.
While a bad fix (incorrect integer N-value) can happen, it is rare to actually get a bad fix with modern instruments unless the operator is really pushing the equipment to work in poor satellite conditions. Even then, the solution is typically reevaluated regularly, although the time in between checks varies by manufacturer. I can count on one hand the number of times I have gotten a bad fix and lost more than 5-10 minutes of work. Usually (as in the case of Trimble gear) it will tell you how many shots were affected.
Best practices and industry standards for land surveying typically specify the 95% confidence level for reporting positional error, so field operators need to be mindful that the precisions on the screen need to be roughly doubled in order to see the "survey-grade" precision of the observations. Again, using RTK does not mean that there is a 1/3 chance your shots are wrong.
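The "roughly doubled" rule of thumb comes from the standard confidence scale factors, which can be sketched directly. The factors below are the usual values for a normal error distribution: 1.96 for a 1-D quantity like elevation, and about 2.45 for a 2-D horizontal position with equal sigmas.

```python
# Scale the 1-sigma (68%) precisions a data collector displays up to 95%
# confidence, the level most surveying standards specify for reporting.
K_1D_95 = 1.960   # 1-D (e.g. elevation), normal distribution
K_2D_95 = 2.447   # 2-D horizontal with equal sigmas

def to_95(h_rms_1sigma, v_rms_1sigma):
    """Convert displayed 1-sigma horizontal/vertical precisions to 95%."""
    return h_rms_1sigma * K_2D_95, v_rms_1sigma * K_1D_95

# A 0.010 m / 0.015 m screen readout is really about 0.024 m / 0.029 m
# at 95% confidence.
h95, v95 = to_95(0.010, 0.015)
```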
Critical points should probably be observed multiple times, but how many times and for how long depends on the conditions, equipment, and requirements of the survey.
We take shots in the woods for lidar VVA-FOREST check points. Up until recently we always set a pair in the open with VRS and then used a total station to get the woods shot. Now, with an R12, we have been getting these shots SOMETIMES with GPS. We do a 3-minute observation, then dump all the satellites and reinitialize. And this is done with a base nearby, typically within 50 meters, NOT using VRS. The nearby base does make a tie with VRS to the NSRS.
I say SOMETIMES because I just talked to my employee in the field this morning; he is having trouble getting it to initialize in thick woods down south in an eastern seaboard state, and had to resort to the old total station method on a few points (we need 49 woods points for this project). Up until now we have had good success with the R12, but maybe not in thick southern woods.
I often wonder WHY the RTK engine on the rover can initialize in mere seconds, yet to post-process we need much longer data sets. Obviously the processing engine in the rover is different from what is in TBC, but why can't they make an office processor that will solve for the integers in seconds like we do in real time? Maybe it is a matter of storing different types of observables than we do now, but if we could store exactly what is transmitted over the cell/radio link and combine it with what is stored at the rover, it seems to me we should be able to post-process RTK data the same as in real time.
I've done a lot of testing when it comes to topo shots. We will generally run some cross sections with a different setup at the base to check the topo. You want a different HI at the base with a different measure-up point. Same on the rover. I've also done RTK checks against PPK and they are very compatible. PPK sessions that remain fixed will usually resolve to a CORS point even many miles away, which gives you a check on the base and topo points.
Friday we were looking at a ground topo that was created for a 2007 mapping project using 2003 control panel points. I took topo shots to do a new ground topo with my R8s and our local control point. When I plotted it against the 2007 topo there was not enough difference to change anything; I'm going to tell the engineer to use it as is. So it's possible to have data match very well. RTK, PPK, and aerial mapping should be able to merge seamlessly; if not, you must have a problem either with bad shots or mismatched processing software. I would advise you to resurvey it, as it sounds like there are issues with the data. It happens to everyone. Since you are in trees, it's not unexpected.
In general, RTK and post-processing handle data much differently from one another. RTK resolves ambiguities from small segments of data (perhaps only a few seconds, along with statistics carried forward from the last engine reset). Post-processing processes all the raw data (sometimes forward and backward). Neither is necessarily better than the other in a broad sense, but each has its relative strengths which can be applied in various environments. Sometimes even the same environment with a small difference in time (i.e., constellation geometry) can favor one solution over the other.
Post-processing is benefited (or perhaps harmed at times) by history. The data collected early in the observation affects the solution as much as the data at the end. This history can be used to isolate and remove bad data to arrive at a good solution in a bad environment. It can also steer the processor in the wrong direction if the bad data becomes the historical basis. RTK is not biased by this history since it is in fact real-time, but it does not benefit from the history either.
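The forward/backward idea can be illustrated with a toy combination step: two independent estimates of the same position (one from a forward filter pass, one from a backward pass) merged by inverse-variance weighting. This is purely schematic, not any vendor's actual processing engine.

```python
# Toy "combined" solution: merge forward- and backward-pass estimates by
# inverse-variance weighting. The combined variance is always smaller
# than either input's, which is why a combined post-processed solution
# can outperform either single pass.
def combine(x_fwd, var_fwd, x_bwd, var_bwd):
    """Inverse-variance weighted combination of two estimates."""
    w_f, w_b = 1.0 / var_fwd, 1.0 / var_bwd
    x = (w_f * x_fwd + w_b * x_bwd) / (w_f + w_b)
    var = 1.0 / (w_f + w_b)
    return x, var

# The better-determined (backward) estimate dominates the result.
x, var = combine(10.02, 0.04, 9.98, 0.01)
```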
In my opinion, it is nice to have both!
Regarding precision values (RMS) for determining the quality of a fixed solution – this is a poor approach to QC/QA. We see the difference in “precision” and “accuracy” as applied in the real world. It is quite possible to have good RMS with a bad fix. The bad fix represents bad accuracy, even though there is epoch by epoch consistency in the position for a few seconds or perhaps even a few minutes. The spread might be only a few millimeters over a relatively short period of time, however the actual position is off by several meters.
Previously the very best way for the user to measure accuracy (good fix vs. bad fix) was by the passing of time. The change of time presented a different geometry of satellites which would present signals affected by multi-path differently. Now, with so many satellites available, it is possible to use some satellites in one processing scheme and then use others in another processing scheme, simultaneously. This represents a different geometry of satellites without the need for waiting. So if one processor using GPS and Glonass, for example, gives the same answer that another processor using Galileo and Beidou, then the likelihood of the solution being correct is very good. Also because RTK and post-processing vary so much in the way that solutions are determined, it is also a very good indication of a good fix if RTK and PP give a similar answer. Of course, until recently, the only way to know if the post-processed solution matched RTK was to wait until you could return to the office to process the data and make the comparison with RTK. Now there are applications that will post-process data in near real-time and present the comparison between RTK and PP before the user ever even leaves the point.
RTK, being real time, can only use current and past information. Post-processing benefits from the program's ability to take later information and calculate backwards to extend data partially or fully through a data gap; that may be just the inclusion of one more satellite for a few more seconds. You have to be prepared to use those differences to your own benefit. One way is to be sure that a few RTK points are located several times throughout your observation sessions. That allows you to better compare, and possibly adjust, your RTK observations. As much as you would like to assume that your RTK data is better than anything else, you would be wrong. There is very good reason for post-processing to exist. In fact, some RTK software can now instantly recognize such a reobservation and automatically adjust point positions.
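The reobservation check above can be sketched as a grouping pass over the shot list: repeated shots on the same point name are averaged, and the spread is reported so sessions that disagree (like the 2+ meter busts earlier in this thread) stand out. The shot list format here is an assumption for illustration.

```python
# Group repeated RTK shots by point name, report the mean position and
# the worst single-coordinate spread for each group. Large spreads flag
# points whose sessions disagree and need a third observation.
from collections import defaultdict

def group_shots(shots):
    """shots: list of (name, N, E, H). Returns name -> (mean, max_spread)."""
    groups = defaultdict(list)
    for name, n, e, h in shots:
        groups[name].append((n, e, h))
    report = {}
    for name, obs in groups.items():
        mean = tuple(sum(c) / len(obs) for c in zip(*obs))
        spread = max(
            max(abs(o[i] - mean[i]) for o in obs) for i in range(3)
        )
        report[name] = (mean, spread)
    return report
```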
Another point not mentioned that many are unfamiliar with is that some RTK solutions may be from L1 only data, not that that is bad. From time to time in the past I found that forcing an L1 only solution gave better precision than an L1/L2 solution. That possibility is now multiplied across several constellations. Post Processing is more conducive to using and comparing various frequencies of data.
I would surmise this same question has originated again and again from those who began GPS surveying from RTK only and not from those who began with static and then added RTK to the mix.
Paul in PA