Ostrander Ski Hut GPS points, trail, and maps

posted Feb 24, 2017, 8:54 AM by Ted Bibby   [ updated Feb 28, 2017, 10:38 AM ]

Ostrander Ski Hut GPS points, trail, and maps: Horizon Ridge Trail

Here are the basic waypoints provided by Yosemite Conservancy (https://www.yosemiteconservancy.org/ostrander-ski-hut) for the Horizon Ridge Trail to Ostrander Hut for winter travel.
The original waypoints were provided by www.ostranderhut.com and the original .pdf is in the file section below. The projection from the original .pdf is a bit strange, so Sean McKnight updated them and provided the .kml file for use with Google Earth. I've provided the .gpx for use with Garmin's BaseCamp so you can load it to your GPS, and also a .csv for those who prefer that sort of thing. The waypoints do a pretty good job of marking the trail. We spent two nights at the hut, had spectacular weather, and even had time for some leisure skiing at the surrounding bowls.
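If you'd rather script the .gpx-to-.csv conversion yourself, the waypoint extraction is straightforward. Here's a minimal Python sketch using only the standard library (it assumes a standard GPX 1.1 file; the function names are mine, not from any of the tools above):

```python
import csv
import xml.etree.ElementTree as ET

# GPX 1.1 files put their elements in this XML namespace
NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def gpx_waypoints_to_rows(gpx_text):
    """Parse GPX text and return waypoint rows as (name, lat, lon) tuples."""
    root = ET.fromstring(gpx_text)
    rows = []
    for wpt in root.findall("gpx:wpt", NS):
        name = wpt.find("gpx:name", NS)
        rows.append((name.text if name is not None else "",
                     float(wpt.get("lat")), float(wpt.get("lon"))))
    return rows

def write_csv(rows, path):
    """Write (name, lat, lon) rows to a CSV file with a header line."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "lat", "lon"])
        writer.writerows(rows)
```

BaseCamp and Google Earth may add extra elements (elevation, symbols), but latitude and longitude always live in the `lat`/`lon` attributes of each `wpt`, so this degrades gracefully.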

View of the waypoints in Google Earth

Fresh snow in Yosemite from the tunnel lookout: Feb. 27th, 2017

How to synchronize folders and files with Cyberduck

posted Feb 15, 2017, 4:55 PM by Ted Bibby   [ updated Feb 24, 2017, 9:16 AM ]

There are a handful of resources on how to synchronize files from your desktop, external drive, or thumb drive to a remote site using FTP. My goal was to back up my cloud storage to an external drive connected to my desktop. Others may want their transfers to go the other direction: you may want to back up your local hard drive to a remote site. Either way works with Cyberduck. I tried this initially in FileZilla, but sadly there doesn't seem to be a clean way to do it there. I really prefer FileZilla's interface, but Cyberduck does the job.

This site has a good background that I won't repeat, but the visuals are old.

Steps below:
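For the curious, the comparison at the heart of a synchronize feature (copy whatever is new or changed on one side over to the other) can be sketched with a local-disk stand-in in Python. This is my own illustration of the idea, not Cyberduck's code; the real transfer of course happens over FTP/SFTP:

```python
import filecmp
import os
import shutil

def mirror(src, dst):
    """One-way sync: copy files that are new or changed in src into dst.

    Local-disk illustration of 'mirror/upload' behavior; a sync client
    performs the same existence-and-difference check against the remote
    side before deciding what to transfer. Returns the copied paths.
    """
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # copy if missing on the destination, or if contents differ
            if not os.path.exists(d) or not filecmp.cmp(s, d, shallow=False):
                shutil.copy2(s, d)
                copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied
```

Running `mirror` a second time against an unchanged source copies nothing, which is exactly the behavior you want from a repeated backup job.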

Texture Mapping in Cyclone

posted Sep 8, 2016, 2:42 PM by Ted Bibby   [ updated Sep 8, 2016, 3:37 PM ]

Overlaying an image on a LiDAR point cloud, aka Texture Mapping:

It took me a while to work this out, so I figured I'd lay out the steps I went through.
The goal here is to take a photo or image taken in the field and place it on a point cloud. Ideally the point cloud is correctly georeferenced in space; it's just the image that is unreferenced. This could be a photo from your iPhone taken while doing field work: now you want a precise measurement of the distance between two items in your photo, or you want the XYZ coordinates of something you can identify in the photo but not in the point cloud, or the image acquired by the scanner was really poor. The short version is to use the Texture Mapping toolbar, create reference points between the image and the point cloud, and then merge the two in your model space.

I'm using Leica's Cyclone Modeler.

Open up Cyclone Navigator and import the image from your camera: right-click on Model Space > Import > navigate to your file.

The image should show up with a little camera next to it.

Next, open your model space with the point cloud you want to overlay the image on and navigate to the Texture Map Browser.

If you have no points selected the browser will be grayed out.

Select a point in the point cloud roughly where you want the picture placed; it does not have to be perfect. Once you've selected a point, the Texture Map Browser will become active. Click the "+" symbol to select an image and click "OK".
If a pop-up asks Perspective or Ortho, click Perspective.

The Texture Editor should open up and show you the picture you want to overlay on the point cloud. You now need to create reference points between the image and the point cloud.

Click control points on the texture map. Holding down the mouse button lets you zoom in and around the image to select individual pixels.

Now, in your model space, click the exact same locations on your point cloud using Multi-Pick Mode. Matching numbers should appear with each click.

Now add the control points via the Texture Editor: click the button that looks like a yellow knot. All your matching control points should appear.
Then click "Compute". You may have to adjust some of the pixel tolerances under Edit Preferences > Modeling > Texture Map.

Click OK.
Click Close.
Select any point in the point cloud, and highlight the row with the picture's name in the Texture Browser.
Click Save.
Click Yes to accept rendering.
Click OK on the next message, and restart Cyclone.

After opening Cyclone again, click a point in the point cloud, then right-click and change the color map option.

If the point cloud colors don't change immediately, you may need to set the point cloud rendering toolbar to use colors from the scanner, not the hue intensity map.

And hopefully all went well and now your image is rendered to the point cloud.

Additional description of steps:
The command is in the menu Edit Object > Appearance > Texture Map Browser.

1. Turn the View > Global Texture Map option ON.
2. Import relevant photos into the project from Navigator (Right-Click > Import).
3. Select Edit Object > Appearance > Texture Map Browser.
4. Pick a point anywhere on the point cloud to enable the ADD ("+") button.
5. Click on the ADD button.
6. Select a photo, and click OK on the next message (if the photo is from a digital camera).
7. The picture (and Texture Editor) should now be visible on screen for mapping common points.
8. Deselect (un-pick) any points currently picked on the point cloud.
9. Start the process of "mapping" common points in point clouds to pixels in photos.
10. Right-click on the picture, and adjust the zoom if necessary.
11. Single-clicking and holding down the button on the picture automatically zooms in, and releasing the mouse button picks the first point to be mapped.
12. Select at least 7 points on the picture by clicking and releasing the mouse button.
13. Now select the same 7 points on the point cloud.
14. Click the ADD button on the Texture Editor.
15. Now click the COMPUTE button, and note the error at the bottom (the smaller the better).
16. You may get an error message to adjust tolerance.
17. Points can be reselected, followed by the COMPUTE operation, until the desired accuracy is obtained.
18. Click the CLOSE button.
19. Select any point on the cloud, and highlight the row with the picture's name in the Texture Browser.
20. Click on the SAVE button.
21. Click Yes to accept recording of colors to the point cloud (CAUTION: colors from the scanner will be lost once the cloud takes the colors from the photograph).
22. Click OK on the next message, and RESTART Cyclone.
23. After opening the modelspace, if the colors are not applied, select the point cloud, right-click, and check "Apply Color Map | Image Texture Map." Color from the image will immediately apply.

Note: keep the left mouse button pressed while picking a point in the image (this zooms in). Use the right mouse button to change between single-pick and multi-pick.
Please also see the Cyclone manual installed under ...\Leica Geosystems\Cyclone, where you will also find some information about texture mapping.
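Cyclone doesn't document its math, but the reason the Perspective option wants a good spread of 7+ point pairs is that it is, in effect, fitting a perspective camera model to your picks. The classic way to do that is the Direct Linear Transform (DLT): stack two linear equations per 3D-2D pair and take the null vector. Here's a rough numpy sketch of that idea (my own illustration, not Cyclone's actual algorithm; function names are mine):

```python
import numpy as np

def dlt_projection(points3d, points2d):
    """Estimate a 3x4 perspective projection matrix from 3D-2D point
    pairs (needs at least 6 well-spread, non-coplanar points)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # the projection matrix is the smallest right singular vector of A
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def reproject(P, points3d):
    """Project 3D points through P; returns pixel (u, v) coordinates."""
    h = np.c_[np.asarray(points3d, dtype=float),
              np.ones(len(points3d))] @ P.T
    return h[:, :2] / h[:, 2:3]
```

The residual between `reproject(P, points3d)` and the clicked pixels plays the role of the error Cyclone reports after COMPUTE: re-picking points to shrink that residual is exactly what steps 15-17 above are doing.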

Monte Carlo Simulations and solving multi-variable differential equations

posted Nov 13, 2014, 8:56 AM by Ted Bibby   [ updated Nov 13, 2014, 1:19 PM ]

Here's a generalized rundown of how I have solved multi-variable equations with multiple acceptable solutions, and then modeled the probability of the solution based on measurement error. Dr. Daniel Morgan (Vanderbilt) has been instrumental in applying this method to modeling surface processes from concentrations of cosmogenic isotopes in rocks. I use Matlab to run my models, but other languages work too.

Papers in my field that use Chi-Squared minimization:
1. Balco, G. & Stone, J. O. Measuring middle Pleistocene erosion rates with cosmic-ray-produced nuclides in buried alluvial sediment, Fisher Valley, southeastern Utah. Earth Surf. Process. Landforms 30, 1051–1067 (2005).
2. Morgan, D. J., Putkonen, J. K., Balco, G. & Stone, J. O. Degradation of glacial deposits quantified with cosmogenic nuclides, Quartermain Mountains, Antarctica. Earth Surf. Process. Landforms 36, 217–228 (2010).
3. Balco, G. Contributions and unrealized potential contributions of cosmogenic-nuclide exposure dating to glacier chronology, 1990–2010. Quat. Sci. Rev. 30, 3–27 (2011).
4. Balco, G., Stone, J. O., Lifton, N. A. & Dunai, T. J. A complete and easily accessible means of calculating surface exposure ages or erosion rates from 10Be and 26Al measurements. Quat. Geochronol. 3, 174–195 (2008).
5. Auer, M. et al. Cosmogenic 26Al in the atmosphere and the prospect of a 26Al/10Be chronometer to date old ice. Earth Planet. Sci. Lett. 287, 453–462 (2009).
6. Vermeesch, P. et al. Interlaboratory comparison of cosmogenic 21Ne in quartz. Quat. Geochronol. (2012).

Background: When we measure things in the lab, like the concentration of arsenic in water, there is an associated error with that measurement. One normally sees measurements reported as 25 +/- 0.23 g/ml.
Now, if we have an equation that we solve for some other variable(s) using the measured arsenic concentration, then there is an associated error with the solution. This is general propagation of measurement error, and there are many resources that explain how it works better than I can here.
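The propagation idea can be sketched numerically: perturb the measurement within its reported error many times and watch how the output of your equation spreads. A minimal Python sketch (the squared-concentration function and numbers here are hypothetical, chosen only for illustration):

```python
import numpy as np

def propagate_error(func, value, sigma, n=100_000, seed=0):
    """Propagate a measurement's reported error through a function by
    sampling: draw values from a normal distribution centered on the
    measurement and summarize the spread of the outputs."""
    rng = np.random.default_rng(seed)
    samples = func(rng.normal(value, sigma, size=n))
    return samples.mean(), samples.std()

# hypothetical example: a derived quantity that depends on the
# squared concentration, for a measurement of 25 +/- 0.23
mean, spread = propagate_error(lambda c: c ** 2, 25.0, 0.23)
```

For simple functions this agrees with the textbook derivative-based formula (here the spread is roughly 2 * 25 * 0.23), but the sampling approach also works when the equation is too messy to differentiate by hand, which is the situation the rest of this post deals with.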

In my work, I measure concentrations of isotopes (think chemicals) in depth profiles from the ground surface to about a meter below ground. In the picture below you'll see our sampling strategy. Every 10 cm or so we collect some dirt and measure the concentration of isotopes (represented by purple dots). Based on some principles of the cosmogenic isotope method, the concentration at depth should follow an expected pattern (pink line), which in my case is mathematically described by a multi-variable equation with 3 unknowns (in its simplest version). For example's sake, the concentration at depth is controlled by 1) time, 2) previous isotope concentrations (think contamination), and 3) the local erosion rate.

Chi-Squared minimization:

So, how do you solve for these three unknowns? You change the three variables over and over until the resulting line matches the measured concentrations. We can then calculate the fit of our line to our data (see linear regression); I use chi-squared.

In Matlab, you'll need a function to solve your equation (Csolution = x + y + z). We have our inputs: three unknowns x, y, z, and one known, Cmeasured. So you call the function, input your variables, and calculate Csolution. You can change the variables over and over and generate many solutions, but we want to match our solution to the actual measured concentrations.

    Csolution = Function(x, y, z)


This gives us an answer for Csolution.

Now we modify our function to find the difference between our solution and our measurements

    [Csolution, Diff] = Function(x, y, z, Cmeasured)



So every time you change your input variables you'll get two answers: Csolution (calculated from your unknowns) and Diff.
To generate a line that fits our data as closely as possible, we want to minimize Diff so that it's as close to zero as possible.
We could do this manually over and over and make a spreadsheet to record our results, but Matlab will do it for us. This is the basic foundation of chi-squared minimization. The above equation to calculate Diff is a simplified example; the equation for chi-squared is slightly more complicated.
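For reference, the chi-squared statistic weights each difference by its measurement error, so noisy points don't dominate the fit. A generic sketch in Python (the standard form, not my exact Matlab code):

```python
def chi_squared(measured, modeled, sigma):
    """Sum of squared differences between data and model, with each
    difference weighted by that point's measurement error."""
    return sum(((m - c) / s) ** 2
               for m, c, s in zip(measured, modeled, sigma))
```

A point that misses by one error bar contributes exactly 1 to the total, which is what makes the minimized value interpretable as a goodness-of-fit.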

I use a function written by John D'Errico called fminsearchcon(). It's modified from fminsearch(), but allows you to put limits on acceptable answers: for example, if you know x is greater than 10 but less than 50, you only want solutions for x in that range.
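fminsearchcon() is a Matlab File Exchange function. Outside Matlab, the bounded-minimization idea can be sketched crudely; the function below is my own illustration (a naive random search within the bounds, not D'Errico's simplex-based method, which converges far more efficiently):

```python
import numpy as np

def bounded_random_search(objective, lower, upper, n=5000, seed=0):
    """Crude bounded minimizer: sample candidate parameter vectors
    uniformly inside the box [lower, upper] and keep the best one.
    A stand-in for a constrained simplex search, just to show how
    bounds restrict which solutions are considered acceptable."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    best_x, best_f = None, np.inf
    for _ in range(n):
        cand = rng.uniform(lower, upper)  # always respects the bounds
        f = objective(cand)
        if f < best_f:
            best_x, best_f = cand, f
    return best_x, best_f
```

The key property is that every candidate is generated inside the bounds, so, like fminsearchcon(), it can never return an x outside your 10-to-50 window.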

So this function's job is to modify the input variables x, y, z over and over until it generates the smallest chi-squared value, and once it does, it spits out what x, y, z are.
Ok, great! Now I have three answers to three unknowns, but how can I prove that they are statistically valid given my measurement errors? If this doesn't make sense, think about this: if my Cmeasured is 25 +/- 0.3, then my solution for x, y, z only takes into account Cmeasured as 25. But what if Cmeasured is 25.03? What would my resulting solution for x, y, z be then? What if Cmeasured is 24.7? This becomes increasingly complicated when you're generating a single solution to fit two or more points.

Here, two plots show the variable solutions. The initial solution (first figure) overshoots when fitting a line to the circle data points (Diff = 20); the second solution (second figure) adjusts the unknowns, lowering the expected concentrations and giving a better fit (Diff = 2). Ignore the square sample points; they were not used in the line-fit statistics.

Think about this: if you have four measured data points (Cmeasured), then you will have one solution for each data point, thus four Csolutions, as a function of three unknowns.

ex) You've measured 4 data points, so Cmeasured = [2 +/- 0.03, 2.3 +/- 0.03, 2.8 +/- 0.03, 3 +/- 0.03] (the errors are written out to remind us that there is measurement error),
        and you use your function to solve for Csolution, generating a solution for each data point: Csolution = [2, 2, 2, 2] (or whatever the corresponding solution is at each depth),
        and your function calculates the differences between Cmeasured and Csolution, so Diff = [0, 0.3, 0.8, 1],
        and the sum of Diff = 2.1, which for the sake of argument is our minimized chi-squared value.
        Now, what if your Cmeasured was actually at the max error, or the min error, or any fraction in between? What would Csolution be then? And what would the resulting Diff be?
You see where I'm going? All of a sudden you have generated a ton of solutions for x, y, z, and they are all, for better or worse, acceptable.
Now you need to prove that there is a single "best" x, y, z solution, and define what the +/- boundaries of the acceptable x, y, z solutions are.

On to Monte Carlo Simulations:

Ok, so we're able to solve for 3 unknowns, but we want to model what all the other solutions would be if our Cmeasured were any other number within the range of the reported errors. To do this, we need to write some code that randomly generates new Cmeasured values that fall within the measured error for each point. The idea is that for some number of iterations, say 10, we generate a new random set of Cmeasured values each time: iteration 1, generate random numbers and solve for the minimized Diff; iteration 2, generate random numbers and solve again; and again, and again. The more times you run this loop the better, and all along the way you record the solutions.

The code would look something like this:

for i=1:100                               % time steps (i.e. do 100 times)

    Crandom = random_number_between_(Cmeasured+error)_and_(Cmeasured-error);   % generate random data points within the error bounds for each point

    [x, y, z, Diff] = Function(Crandom);  % solve/minimize x, y, z for the randomly generated Cmeasured

    Recordx(i) = x;                       % record the x value for each iteration in an array
    Recordy(i) = y;                       % record the y value for each iteration in an array
    Recordz(i) = z;                       % record the z value for each iteration in an array
    RecordDiff(i) = Diff;                 % record the Diff value for each iteration in an array

end

Once the code has cycled 100 times (or more, depending on robustness) you will have results for all your possible solutions given any random data point within the error.
The next step is to visualize this. Matlab has a convenient cumulative distribution function plot that organizes all the potential solutions and ranks them.
The top figure here is a bar plot of my solution distributions divided into bins; "time" is one of my solved variables. The red line marks the mean solution, and the green lines mark the 68% confidence intervals. The bottom plot is what cdfplot() returns (blue line) within Matlab, showing the cumulative distribution of my solved variables. For the original example at hand, you can generate three cdfplot()'s, one for each solution (x, y, z). These will be your reported solutions +/- the confidence interval of your choosing (generally 1, 2, or 3 sigmas).
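Translated out of Matlab, the whole loop plus the percentile step might look like this in Python. A toy line fit stands in for my isotope model, and all names here are mine; the 16th and 84th percentiles carry the same information as reading the 68% interval off a cdfplot():

```python
import numpy as np

def monte_carlo_slope(x, y, sigma, n=500, seed=0):
    """Refit a line n times to data perturbed within its measurement
    error; return the mean slope and a 68% interval taken from the
    16th/84th percentiles of the recorded solutions."""
    rng = np.random.default_rng(seed)
    slopes = np.empty(n)
    for i in range(n):
        # generate random data points within the error bounds
        y_rand = y + rng.uniform(-sigma, sigma, size=y.size)
        # solve for this realization and record the solution
        slopes[i] = np.polyfit(x, y_rand, 1)[0]
    return slopes.mean(), np.percentile(slopes, [16, 84])
```

Swapping np.polyfit for a chi-squared minimizer with three unknowns changes nothing structurally: you still perturb, solve, record, and then report percentiles of the recorded arrays.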

There are a number of other nuances to write about, considerations, yadda, yadda, but this should be a good jumping off point. Have fun!

Archived entries

posted Nov 3, 2014, 10:07 AM by Ted Bibby

Arduino Climate Station

posted Nov 3, 2014, 9:39 AM by Ted Bibby   [ updated Nov 13, 2014, 1:33 PM ]

I wanted to learn about Arduinos and see if I could make a cheaper climate station using some anemometers I had lying around. The economically priced Arduino (~$20) may prove to be a useful data logger for multiple systems in the future.

  • Arduino Uno
  • Met One 010C Wind Speed Sensor
  • Met One 020C Wind Direction Sensor
  • 12 V power supply

In brief, after finding versions of code floating around the internet, I was able to program my Arduino to show a real-time display of wind speed and direction as simulated in my office. The wind direction sensor was easy to calibrate, and its output signal was easy to decipher, because the output voltage from the Met One instrument is directly correlated to the direction/position of the wind vane. I have my wind vane set to output a range of 0-2.5 volts for 0 to 360 degrees of rotation. There is also an option to change the voltage output from the wind vane to 0-5 volts, depending on your requirements. The code below should be adjusted accordingly for your configuration.

The output signal from the wind speed sensor was much more difficult to calibrate and get working properly. I tried to compare my readings to a handheld Kestrel and feel confident that the same readings are recorded, but I did not have a standardized way of generating a constant wind speed: the reading from the Kestrel varied depending on its position, and the rotation of the Met One anemometer cups was driven from only one side of the instrument, because the computer fan I rigged up to spin the sensor blew unevenly.

I wired in a back-lit LCD display with a potentiometer knob to control the contrast of the display. I used another potentiometer to control the output voltage coming from the speed sensor. The speed sensor is designed to output a 12 volt pulse proportional to the rate of rotation. This was the tricky part: I used the potentiometer to reduce the voltage coming from the sensor because the Arduino is not designed to receive 12 volt signals. It probably wouldn't hurt it, but I didn't want to chance it. The other key piece of information here is the conversion from pulse rate to an actual wind speed (distance/time), which required calling Met One (which has great customer service) and getting them to send me the technical manual for the speed sensor. I'll attach that information below.

// Direction, Speed and LCD

// include the library code:
#include <LiquidCrystal.h>

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12, 11, 5, 4, 3, 7);
const byte BUTTON = 2;           // speed-sensor pulse input (interrupt 0)
volatile float elapsed, start;
volatile float windspeed;

// the setup routine runs once when you press reset:
void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  pinMode(BUTTON, INPUT);
  digitalWrite(BUTTON, HIGH);              // internal pull-up resistor
  attachInterrupt(0, pinChange, RISING);   // attach interrupt handler
  start = millis();
  // set up the LCD's number of columns and rows:
  lcd.begin(16, 2);
}

// Interrupt handler: called once per sensor pulse
void pinChange() {
  elapsed = millis() - start;               // gets the full period (in ms) of rotation
  windspeed = (1.789 * (1 / elapsed)) + 1;  // gets freq from ms, then converts to mph (according to hardware spec)
  start = millis();                         // resets the counter
}  // end of interrupt handler

// the loop routine runs over and over again forever:
void loop() {
  // read the inputs on analog pins 0 and 5:
  int sensorValueDir = analogRead(A0);
  int sensorValueSpd = analogRead(A5);
  // convert the analog reading (which goes from 0 - 1023) to a voltage:
  float voltageDir = sensorValueDir * (2.5 / 1023.0);
  float voltageSpd = sensorValueSpd * (12.0 / 1023.0);
  // convert voltage to degrees; 0 & 360 = North, 90 = East, 180 = South, 270 = West
  float degree = voltageDir * (720 / 2.5);
  // print the values to the serial monitor:
  Serial.print(" Volts_Dir ");
  Serial.print(voltageDir);
  Serial.print(" Degrees ");
  Serial.print(degree);
  Serial.print(" Volts_Spd ");
  Serial.print(voltageSpd);
  Serial.print(" Mph ");
  Serial.println(windspeed);
  // print to the LCD screen
  // set the cursor to column 1, line 0:
  lcd.setCursor(1, 0);
  // print the wind speed:
  lcd.print(windspeed);
  lcd.print("  Mph");
  // set the cursor to column 1, line 1
  // (note: line 1 is the second row, since counting begins with 0):
  lcd.setCursor(1, 1);
  // print the wind vane degrees:
  lcd.print(degree);
  lcd.print(" Deg.North");
}

The Met One 010C user manual is attached below. Page 6 has the transfer functions required to convert pulse rate to distance/time.
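The conversions buried in the sketch can be checked on their own, off the board. Here's the same arithmetic as plain Python (the 0-2.5 V to 0-360 degree mapping follows the calibration described above, while the sketch uses a 720 scale factor, so treat these constants as configuration-dependent; the pulse constants mirror the sketch, which in turn came from the 010C transfer function):

```python
def counts_to_volts(counts, vref=2.5, resolution=1023):
    """Convert a 10-bit Arduino analogRead value (0-1023) to volts."""
    return counts * (vref / resolution)

def volts_to_degrees(volts, vmax=2.5):
    """Map the vane's 0-vmax volt output onto 0-360 degrees
    (per the 0-2.5 V = 0-360 degree calibration described above)."""
    return volts * (360.0 / vmax)

def period_ms_to_mph(elapsed_ms):
    """Convert one rotation period (ms) to mph using the same
    constants as the sketch above."""
    return 1.789 * (1.0 / elapsed_ms) + 1.0
```

Having the conversions in a separate script makes it easy to sanity-check logged raw values after the fact, rather than trusting the live LCD readout.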
