Low Rate Peristaltic Pump
From J. Arrizza Wiki
- 1 Objective
- 2 (Current) Stepper Motor with Peristaltic Head
- 3 The Scale
- 3.1 (Current) Acculab/Sartorius Scale
- 3.2 Ohaus Scale
- 4 12VDC Motor with Peristaltic Head
I have been trying to build a very low rate pump using off-the-shelf parts. The goal is to be able to run at rates from 0.1 mL/hr (or lower) to 1000 mL/hr (or higher).
The discussions below are some of the configurations, hardware, software and other problems I have faced. It is in reverse chronological order, so the most recent attempts are immediately below this point.
- calculate formula and test, see #Translate PPS to Fluid Rate
- DONE Do a sample run #Initial Test Results
- DONE run some sample tests to get sample data
- DONE graph the current data
- DONE use GSL https://www.gnu.org/software/gsl/ to calculate the linear regression formula coefficients
- update arduino controller to use longer time base and more slots (to get lower possible rates) #To Go Slower
- delete current test data and rerun tests with the new setup to recalculate pps to flowrate coefficients
- add the linear coefficients to the motor control
- run tests to gather a series of expected flow rate vs actual flow rate
- DONE Do a sample run #Initial Test Results
- DONE set up a high resolution scale, see #(Current) Acculab/Sartorius Scale
- DONE set up a Stepper motor, no feedback, with a peristaltic head, see #(Current) Stepper Motor with Peristaltic Head
- DONE initially used a low resolution scale, see #Ohaus Scale
- DONE set up a simple 12V motor, no feedback, with a peristaltic head see #12VDC Motor with Peristaltic Head
The strategy I'm using is straightforward:
- set up the pump and controller to be able to run at a wide range of speeds.
- over a wide range of control values, calculate the actual fluid rate
- use linear regression to find a formula that translates from the actual fluid rate to a control value, e.g. I want 2 mL/hr, therefore I send "45" to the pump controller.
- use that formula in the control software
- confirm all is ok by retesting the overall system
Step 1 is easy. The various pump controllers have a speed mechanism and they take a control value of some kind to allow us to change the speed of the pump. Sometimes it is just a raw number (say from 0 - 255) or it could be a voltage or some other electrical value or signal (e.g. pulses per second).
Step 2 is hard. Its complexity is hidden in the words "actual fluid rate". See below for more.
Step 3 is easy. The regression formula can be 1st order or higher to get a good fit (a high R2 value, close to 1). It could incorporate other variables in the future, e.g. time elapsed, fluid pressures, etc.
Non-linearity in the motor control or in the pumping mechanism can be smoothed out (up to a point) with higher order regressions. Note there is always the caveat of over-fitting to the data!
Step 4 is easy. As always, I need to watch out for floating point arithmetic errors and glitches.
Step 5 is easy - if we've solved the problems in Step 2!
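The fit in step 3 is ordinary least squares. Here is a minimal Ruby sketch of a 1st-order fit; the (rate, control value) data points are hypothetical, for illustration only:

```ruby
# Step 3 sketch: 1st-order least-squares fit translating a desired
# flow rate (mL/hr) into a pump control value.
# The data points below are hypothetical, for illustration only.
def linear_fit(xs, ys)
  n   = xs.size.to_f
  sx  = xs.sum
  sy  = ys.sum
  sxx = xs.sum { |x| x * x }
  sxy = xs.zip(ys).sum { |x, y| x * y }
  slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx)
  intercept = (sy - slope * sx) / n
  [intercept, slope]
end

rates    = [100.0, 200.0, 400.0, 800.0]   # measured flow rates (mL/hr)
controls = [230.0, 460.0, 920.0, 1840.0]  # control values that produced them

b, m = linear_fit(rates, controls)
# "I want 300 mL/hr, therefore I send..." the formula's answer:
control = (b + m * 300.0).round
```

Higher orders need a polynomial fit (the GSL library used later handles that); the 1st-order case is enough to show the shape of the translation.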
Since this project is just to satisfy my curiosity, I probably won't be doing these extra tasks but you never know...
- Testing across multiple motors, motor controllers and pump heads is probably a very good idea!
- Investigate the idea of adding a calibration facility. This could be another formula that tweaks just a little for each individual pump.
- Test the extremes.
- What is the fastest rate?
- What is the slowest rate?
- Test the exceptions:
- What happens if the fluid reservoir runs out?
- What happens if the tubing isn't primed correctly?
- What happens if you clamp an input or an output line?
- What if the power supply voltage isn't quite correct?
(Current) Stepper Motor with Peristaltic Head
I found a peristaltic pump with a Stepper Motor on Amazon:
I bought one for just under $50; since then it has been marked as "Currently unavailable".
The motor controller is an EasyDriver Stepper Motor Driver for $15.
I hooked this up to an Arduino Mega:
See below in section #The Scale for a more extensive discussion of problems with the scale.
The end result of all the testing is to acknowledge that the scale will have problems dealing with extremely low rates. There may be problems even if normal statistical techniques are used to eliminate noise in the readings. The noise is not consistent and may not even be random or normally distributed.
The work around is ... TBD. I believe my best strategy at this point is to establish the linear regression formula at higher rates and then use that for all rates. Then perform as much testing as I can at the lowest rates.
My reasoning is this: the pumping mechanism's response to a single stepper pulse should be the same across all rates. It should pump the same amount of fluid whether the pulse is part of a fast stream or a single pulse within a one-second span. However, this may not be true, and the only way to find out is to have accurate test results at all rates. If I can test at very low rates, then I may get lucky and gather some data from a good run or two. That will help! But given the limitations of the scale and my test environment (my kitchen table), this is the best I can do at this point.
I have been converting to JetBrains products from the Eclipse and NetBeans IDEs. I used the CLion IDE from JetBrains https://www.jetbrains.com/clion/ and installed a couple of plugins:
- Serial Port Monitor - allows you to see UART output from the Arduino within CLion
- Arduino - allows you to write and compile Arduino code from CLion
First, I wrote a bit of code for the Arduino to ensure the controller worked ok. All it did was pulse the controller pin and then sleep for 500ms. That double checked these were functioning ok:
- all wiring to the correct Mega GPIO pin
- compilation, download, etc. and the Mega Arduino itself
The motor "ticked" every half second as expected. I tried faster speeds and the system behaved as expected. This test code ran in the main loop().
Next I needed to check that this pulsing could be done in an interrupt. I created an ISR (interrupt handler) based on a 1ms timer which incremented a counter. Every 500 counts, it would tick as the stepper was pulsed; again the system behaved as expected.
I also checked the timer. I toggled another GPIO line every interrupt and checked the frequency on that pin with an oscilloscope. It matched the expected frequency within a few decimal places.
I wanted to be able to set the pulse rate from a Ruby script over a Serial Port/UART line. I wrote some test code to simply receive a few bytes over the UART line, add a couple of other bytes, and respond back to the sender. I wrote the sending side as a Ruby script and the communication worked as expected.
At this point, all the pieces were in place and I was sure they were working correctly.
The overall design is simple.
- In setup()
- initialize the IRQ timer to run every milli-second
- initialize the UART
- initialize a global array with 1000 entries to 0s; it holds the pulse count for every ms
- initialize a global index into the array
- in ISR:
- increment the global index; if past the end of the array, set to 0
- pulse the Stepper GPIO pin the number of times (0 - 255) stored in the array entry at the current index
- in loop()
- If there is an incoming character on the UART
- read the incoming character string until the end of the packet; this holds the total number of pulses to set up per second
- check for a valid Pulses Per Second (pps) value
- if valid, distribute those pulses evenly throughout the array
The distribution code was a little tricky until I realized I could use Bresenham's formula https://en.wikipedia.org/wiki/Bresenham's_line_algorithm. To see this in action for a simpler Arduino setup see ArduinoAccuratePulser. There is some additional discussion about the algorithm here ArduinoAccuratePulser#Modified_Bresenham.27s_Line_Algorithm
This algorithm is normally used to draw a line on a screen, but at a more fundamental level it takes a continuous formula and converts it into a discrete ("pixelated") version that minimizes the overall error. Each individual pixel may not match the expected continuous value exactly, and those mismatches are the individual errors; the algorithm minimizes their total. In short, the smaller the overall error, the "smoother" the line looks on the screen.
The idea then was to take the total number of pulses, divide it by the number of slots and then use Bresenham's formula to minimize the total of the error values for each slot. For example, say the total number of pulses is 100. Then 1000/100 = 10.0 and so every 10th slot would have a 1 in it, the rest would have 0's. In this case it is a nice fit, but oddball numbers aren't as lucky. For example, say the total pulses is 993. Then 1000/993 = 1.000705. So most of the slots are 1, but every so often there is a 0 (for a total of 7 in the entire array).
The algorithm required some minor changes to work for my purposes, but eventually it did distribute the pulses evenly across the 1000 slots. It worked fine whether the total number was low (e.g. 1, 2, 3...) or high (e.g. ..., 997, 998, 999). I double-checked against smaller arrays (e.g. 10 elements) and it worked fine there as well. And finally I double-checked against arrays with a prime number of elements (i.e. the worst case) and it worked fine again.
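The distribution step can be sketched with the classic Bresenham-style error accumulator: keep a running error, emit the whole-number quotient of pulses for each slot, and carry the remainder forward. This is a simplified Ruby illustration of the idea, not the actual Arduino code (which works on a byte array inside an ISR):

```ruby
# Spread `total` pulses as evenly as possible across `slots` array
# entries, Bresenham-style: accumulate error, emit floor(err/slots)
# pulses per slot, carry the remainder. Simplified sketch only.
def distribute(total, slots)
  arr = Array.new(slots, 0)
  err = 0
  slots.times do |i|
    err += total
    pulses, err = err.divmod(slots)  # quotient goes in the slot, remainder carries
    arr[i] = pulses
  end
  arr
end
```

With this sketch, `distribute(993, 1000)` yields 993 ones and 7 evenly spread zeros, and `distribute(100, 1000)` puts exactly one 1 in every run of 10 slots, matching the examples above.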
I started it up and since the array was initially all 0's, no motor activity occurred.
I sent it a 1 for the total number of pulses. I expected to see the pump move/step once a second. I could not see it move, but I could feel it "tick" regularly, once a second. Note this is the slowest speed I can run the motor at this time; see #To Go Slower.
I tried 100 and the pump moved slowly but visibly.
I tried various numbers all the way up to 3000 or so, at which point the pump stalled. I checked the EasyDriver controller doc: on its board there is a current-limiting pot. By adjusting it (i.e. giving the motor more current), I was eventually able to consistently go up to 6000 pps without the motor stalling. Note that 6000 pps is about 2200 mL/hr, which is faster than my goal.
Initial Test Results
I modified my script that does the test runs to automatically distribute tests across as wide a range of PPS values as possible. I ran that script and gathered a bunch of test data. It saves it in a YAML file.
I then wrote a script to analyze the test data. It reported these ranges:
Flow Rates : 214.40573840907038 - 2615.660888644071
Pulse Rates: 600.0 - 6000.0
I then added some analysis to check the various test points:
flow rates per run (mL/hr, rounded to two decimals), sorted by pps:

pps= 600: 214.41, 249.72, 250.77           avg=  238.3  stdev=  16.9  var=   285.7
pps= 700: 297.33, 296.69, 295.45           avg=  296.5  stdev=   0.8  var=     0.6
pps= 750: 314.79, 318.09, 315.03           avg=  316.0  stdev=   1.5  var=     2.3
pps= 800: 339.33, 337.11, 339.62           avg=  338.7  stdev=   1.1  var=     1.3
pps= 900: 379.65, 377.25, 381.29           avg=  379.4  stdev=   1.7  var=     2.8
pps=1000: 420.78, 423.32, 419.48           avg=  421.2  stdev=   1.6  var=     2.5
pps=1500: 630.76, 629.69, 632.06           avg=  630.8  stdev=   1.0  var=     0.9
pps=2000: 868.61, 856.83, 949.24           avg=  891.6  stdev=  41.1  var=  1686.7
pps=2500: 1066.24, 1069.86, 1077.88        avg= 1071.3  stdev=   4.9  var=    23.7
pps=3000: 1271.34, 1256.48, 1264.64        avg= 1264.2  stdev=   6.1  var=    36.9
pps=3500: 1500.13, 1499.23, 1487.53        avg= 1495.6  stdev=   5.7  var=    32.9
pps=4000: 1752.90, 1748.63, 1747.66        avg= 1749.7  stdev=   2.3  var=     5.2
pps=4500: 1949.64, 1958.46, 1955.76        avg= 1954.6  stdev=   3.7  var=    13.6
pps=5000: 2145.34, 2142.75, 2112.74        avg= 2133.6  stdev=  14.8  var=   218.9
pps=5500: 2351.20, 2377.05, 2346.07, 2384.21   avg= 2364.6  stdev=  16.3  var=   265.7
pps=6000: 2582.99, 2133.87, 2016.80, 2615.66   avg= 2337.3  stdev= 265.5  var= 70487.9
Each group of readings shows the actual pps and flow rate for the individual runs, followed by the average flow rate, standard deviation and variance. A couple of values jump out:
- pps = 6000, the variance is very high
- pps = 2000 seems to have a high variance as well
You can clearly see the anomaly in the pps=6000 data. The pps=2000 data is a little more subtle in that the line "veers" off to one side a little.
The coefficients are displayed on the graph too. My script actually calculates multiple regression formulas with order 1 - 3 and displays the best line on the same graph in green. Here are all of the coefficient sets it found:
fit where x is rate, formula returns f(x)=pps to use
Note: lower chi squared is better
Sample: for flowrate=297 expected pps=700

order: 1
  coefficients: [ -8.219e+00  2.376e+00 ]
  chi squared : 2698848.495
  valid: 0
  sample x= 297 f(x)=698 expected=700

order: 2
  coefficients: [ -1.072e+01  2.383e+00  -2.424e-06 ]
  chi squared : 2698790.255
  valid: 0
  sample x= 297 f(x)=697 expected=700

order: 3
  coefficients: [ 2.522e+02  1.365e+00  8.953e-04  -2.183e-07 ]
  chi squared : 2528484.682
  valid: 0
  sample x= 297 f(x)=731 expected=700
The "sample x=" is just a simple calculation I did using actual data from the tests. The idea is that it is a quick and dirty double check. The flowrate of 297 (x value) should result in an expected pps of 700. The various coefficient sets get pretty close.
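That spot check can be reproduced by evaluating the fitted polynomial directly. A minimal Ruby sketch using the rounded order-1 coefficients printed above (the rounding of the coefficients shifts the result slightly, to 697 rather than the reported 698):

```ruby
# Evaluate a polynomial given GSL-style coefficients [c0, c1, c2, ...]
# at x, i.e. f(x) = c0 + c1*x + c2*x^2 + ...
def poly_eval(coeffs, x)
  coeffs.each_with_index.sum { |c, i| c * x**i }
end

# Rounded order-1 coefficients from the regression output above;
# the full-precision coefficients reported f(297) = 698.
coeffs = [-8.219e+00, 2.376e+00]
pps = poly_eval(coeffs, 297).round
```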
These calculations were done by the GNU Scientific Library and the matching ruby gem gsl. I also used the ruby gem descriptive_statistics to do the stddev and mean calculations:
# for the GNU Scientific Library:
#   gem install descriptive_statistics
#   sudo apt-get install gsl-bin libgsl0-dev
#   gem install gsl
#
# for gnuplot
#   sudo apt-get install gnuplot-x11
#   gem install gnuplot
#
# ==============================================================
require 'descriptive_statistics/safe'
require 'gsl'
require 'gnuplot'
You can see I used gnuplot to do the graphing. I had originally used imagemagick but the resulting graph wasn't as clean:
# sudo apt-get install libmagickwand-dev imagemagick
# sudo gem install imagemagick
#
require 'rmagick'
To Go Slower
With the current setup, 1 pps is roughly 0.45 mL/hr. To go slower than this rate is simple: pulse at a lower rate.
There are two main ways that can be achieved:
- slow down the ISR timer
- use a longer array
Instead of iterating through the array every millisecond, process it every 10 or 100ms. This is quite easy to do by changing the IRQ parameter that initializes the timer. Note that the timeout does not have to be a multiple of 10, or even an even number at all; it simply has to be a very regular, consistent interrupt. The only caveat is that at very long timer values, the pulse train going to the stepper could "jitter" and not be as smooth as possible. My guess is that 100ms or so is the maximum practical timeout. This translates into a flow rate roughly 100 times slower than the current minimum (0.45 mL/hr): 0.0045 mL/hr.
Using a longer array is probably the better option. Right now the array length is 1000 entries. With 10,000 entries, I could achieve a flow rate which would be 10 times slower. The problem is that the Arduino Mega only has 8k RAM available. The max size of the array then is slightly less than 4K bytes since:
- there are other variables in the code that take up some memory
- when an incoming command comes in, I fill in a 1K buffer and then if all is well, I copy that entire buffer over to the array, i.e. I need twice the RAM for a given array size
In reality, after all is said and done, I will probably bump the array to 3K elements and also slow down the timer to 3ms or so. That gives me a (3 * 3 == 9) factor slower pulse rate, i.e. 0.050 mL/hr, which is well under my goal of 0.1 mL/hr. I could slow down the timer to 30ms and thereby achieve a (3 * 30 == 90) factor, which is 0.005 mL/hr and under the longer term goal of 0.01 mL/hr.
In all of this, I do need to worry about the higher end. The array elements are bytes, therefore the max value in each element is 255.
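The reachable rate range for a given configuration is simple arithmetic. A Ruby sketch, assuming the measured baseline of roughly 0.45 mL/hr per pulse-per-second and one byte (max 255 pulses) per slot:

```ruby
# Sketch of the reachable-rate arithmetic. Assumes the measured
# baseline of roughly 0.45 mL/hr per pulse-per-second; each slot
# in the array holds one byte (max 255 pulses).
ML_PER_HR_PER_PPS = 0.45

def rate_range(slots, timer_ms)
  pass_seconds = slots * timer_ms / 1000.0  # one full pass through the array
  min_pps = 1.0 / pass_seconds              # a single pulse per pass
  max_pps = 255.0 * slots / pass_seconds    # every slot maxed out
  [min_pps * ML_PER_HR_PER_PPS, max_pps * ML_PER_HR_PER_PPS]
end

# current setup: rate_range(1000, 1)  -> minimum 0.45  mL/hr
# planned setup: rate_range(3000, 3)  -> minimum 0.05  mL/hr
# slower still : rate_range(3000, 30) -> minimum 0.005 mL/hr
```

Note the high end is bounded in practice by the motor stalling around 6000 pps, far below the theoretical 255-pulses-per-slot maximum.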
Translate PPS to Fluid Rate
At this point, I have a system that accurately and fairly precisely pulses the stepper at a given PPS rate. But the user of this system is more interested in fluid rate, not pulses per second. This brings up two questions:
- how can I derive a formula that translates from expected fluid rate to PPS
- how accurate is that formula?
The simplest way to get a formula is to perform a series of tests and use linear regression to get the best fit formula to translate from expected Fluid Rate to PPS. The methodology is:
- Do the following for a significant set of samples across the entire range of PPS available (1 to 6000 PPS say)
- run a test at that PPS
- measure the actual fluid rate using a high accuracy scale
- Add all data points to a spreadsheet, i.e. actual rate vs PPS value
- Calculate a linear regression formula and the R2 value for the data; check the R2 value indicates a good, close fit
- Double-check the formula:
- enter an expected rate into the formula
- send the resulting PPS to the motor
- measure the actual fluid rate using a high accuracy scale
A few notes based on the methodology above:
- "significant set of samples"
- in short the more the merrier. But note that at low rates the tests take a long time so there is a trade-off.
- "measure the actual fluid rate"
- this is quite difficult at rates below 5 mL/hr or so; see #The Scale
- linear regression fit
- could use 1st, 2nd or higher order linear regression. Use R2 to choose one. Watch out for over-fitting.
- Calculate fluid per pulse
- one available double-check is to calculate the fluid per pulse at the various rates. For each motor pulse, the peristaltic pump turns very slightly, but roughly the same amount each time. Averaged out across a few thousand or more pulses, that value should be consistent. In my case it was roughly 0.115 - 0.119 µL/pulse (115 - 119 nL/pulse) measured over rates ranging from 1 mL/hr to 2200 mL/hr.
- Multiple motors
- Should test across multiple motors. Right now it is, after all, a sample size of one.
- Multiple pump heads
- Should test across multiple pump heads. Different heads may have different roller mechanics
- Multiple pinch tubing
- should test across multiple samples of the tubing used in the peristaltic pump.
- Multiple Arduinos
- Should test across multiple Arduino boards. The IRQ timer consistency is important. If it is interrupted every 1.003ms say, it should be that value across multiple Arduino boards. E.g. the crystal oscillator used to set the Arduino clock may not be accurate/precise across different boards
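The fluid-per-pulse double check above is a direct unit conversion. A minimal Ruby sketch, using the pps=1000 average flow rate from the test data as a sample input:

```ruby
# Fluid delivered per motor pulse, in microliters:
# convert mL/hr to uL/hr, and pulses/sec to pulses/hr, then divide.
def ul_per_pulse(flowrate_ml_hr, pps)
  (flowrate_ml_hr * 1000.0) / (pps * 3600.0)
end

# e.g. the pps=1000 average of 421.2 mL/hr gives about 0.117 uL/pulse
v = ul_per_pulse(421.2, 1000)
```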
(Current) Acculab/Sartorius Scale
The first scale, an Ohaus (see #Ohaus Scale), had some problems because of its low resolution.
To overcome these I bought another scale, an Acculab that has a 0.001g resolution up to 300g max value. https://www.amazon.com/gp/product/B00RTL0CFQ/ref=od_aui_detailpages00?ie=UTF8&psc=1
This scale is 100 times more sensitive than the Ohaus. In fact, it is so sensitive that air currents can change the reading. Blowing towards the scale from a couple of feet away can change the reading by 0.050g or more; closer, it can change by up to 0.4g or more.
The scale comes with a cover, a "windscreen", which reduces the effect substantially. However, the cover has a hole in the top, so it is still possible to make it change by -0.002g or so. Note that the change is negative. The reason is that air blowing over the hole causes the Bernoulli Effect, i.e. it creates a vacuum and the scale registers the force of that vacuum. https://en.wikipedia.org/wiki/Bernoulli's_principle
In short, there could be very slight changes in the reading simply by air currents (e.g. furnace or A/C turning on) surrounding the scale. To get a better understanding of how big an effect this is, I set up a test where I took a reading every second for 2 hours and I counted the number of times the scale was non-zero. Since there was nothing on the scale and since I tared it at the start of the run, the expectation is that of the 7,200 readings, they should all be 0.000g. There were in fact, 367 readings that were non-zero, i.e. 5.1%. They were mostly +/- 0.001g and a dozen or so at +/- 0.002g. I did not calculate a standard deviation for these readings.
Some of these non-zero readings may have been from noise within the scale itself. Again, I assume that it samples its sensor(s) multiple times for each displayed reading and that the sensor has noise.
Another problem was evaporation. When there was fluid in the receptacle and the pump stopped, the readings would drift slowly downwards. The best explanation for this was evaporation.
To informally test for this, I took an open-faced receptacle and filled it with water until it read 100.000g (or so). I took a reading every 10 - 20 minutes (recorded literally on the back of an envelope) and kept track for a couple of hours. The windscreen was up, so this may have had an effect on some of the readings. The readings dropped substantially, roughly 0.040g to 0.060g per hour. Note these were rough measurements and guesses, so this is not completely accurate, but it did account for the change in readings.
To double-check, I re-ran the same test with the same receptacle, but this time with a lid tightly closed on it. Informally, the readings were very stable, hovering around +/- 0.001g as they normally do. In short, the slow downward drift is fluid evaporation.
Small Petri dish using a script
To double-check even further, I wrote a script to run the evaporation test automatically, so I did not have to trust my note taking capabilities. This time the receptacle was a small petri dish about 70mm in diameter. It has a cover, but for this test, I left it off. I ran a script which took a reading every minute for 5 hours. The weight readings dropped from 30.624g to 29.428g over the 5 hours, a total of 1.196g drop which is 0.239g per hour average rate. I plugged those values into a LibreOffice spreadsheet, plotted it on a graph and calculated the linear regression formula for the data. The slope of the line is -5.8819E-06 which indicates that the evaporation rate was 5.89 micro-grams every second or about 0.021g every hour. This is less than when I did the same measurements informally but in the same ballpark.
I ran the test again. I refilled the petri dish and put its cover on it. This time the readings dropped from 34.823g to 34.763g, a total of 0.060g, which is 0.012g per hour average rate. The linear regression slope is -3.1958E-06, which is 0.011g every hour. This is less than the uncovered evaporation rate, but still relatively high - roughly 10% - relative to a fluid rate of 0.1 mL/hr.
Large Petri dish using a script
I redid the tests with a larger petri dish. To make the graph look a little nicer, I modified the output to print every minute instead of every second. Note there could have been changes in these readings as well because of ambient temperature and humidity levels.
Uncovered, the weight readings dropped from 52.545g to 52.072g over the 5 hours, a total of 0.473g drop which is 0.0946g per hour average rate.
The slope of the line is -1.506E-03, which indicates the evaporation rate was 0.015g (15mg) every minute or about 0.090g (90 mg) every hour. Note the R2 value is 0.999, which indicates that the linear slope of this line is a very tight fit. And also note that this rate (0.090g/hr) is faster than the small petri dish rate of 0.021g/hr. This is expected because of the larger surface area of the new petri dish.
Covered, the weight readings dropped from 58.842g to 58.816g over the 5 hours, a total of 0.026g (26mg) drop which is 0.005g (5mg) per hour average rate.
The slope of the line is -4.956E-05, which indicates the evaporation rate was about 50 micro-grams every minute or about 0.003g (3mg) per hour. Note the R2 value is only 0.207, which indicates the line is not a good fit. This is most likely due to random noise in the readings. And also note that this rate is substantially less than the uncovered rate, as expected.
Large Petri dish with a small hole in the cover
To be able to drip water into the Petri dish, I drilled a hole in the cover. I reran the test once more, with the cover (with the hole in it) on the petri dish. This should get the evaporation rate of my final setup.
In this case, the weight readings dropped from 36.952g to 36.898g over the 5 hours, a total of 0.054g drop which is 0.011g per hour average rate.
The slope of the line is -0.000174, which indicates the evaporation rate was 174 micro-grams every minute or about 0.010g (10 mg) every hour. Note the R2 value is 0.961, which indicates that the linear slope of this line is a tight fit.
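The slope-to-hourly-rate conversions in these runs all follow one pattern: the regression slope is in grams per reading interval (one minute here), so scaling by 60 gives g/hr. A minimal Ruby sketch:

```ruby
# Convert a regression slope (grams per one-minute reading interval)
# into [g/hr, mg/hr].
def slope_to_hourly(slope_g_per_min)
  g_per_hr = slope_g_per_min.abs * 60.0
  [g_per_hr, g_per_hr * 1000.0]
end

# large dish, uncovered : slope -1.506E-03 g/min -> ~0.090 g/hr
# large dish, covered   : slope -4.956E-05 g/min -> ~3 mg/hr
# cover with small hole : slope -1.74E-04  g/min -> ~10 mg/hr
```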
I did one test where I put a metal weight (100g) on the scale and read its weight over 5 hours. The expectation is that it would read 100.000g over the full 5 hours, since there was no evaporation or other effect on the scale.
The results indicate an increase in the weight readings of 0.017g (17mg) over the course of the test. The R2 value was 0.47, which indicates the line is not a good fit - as expected for readings that should be flat. The "slope" of the line is 2.837E-05, which indicates a change of 28.37 micro-grams every minute or about 1.7mg per hour.
Here's another run
This shows a nice flat run, except for a 0.600g spike that occurred around 100 minutes into the test. The R2 value shows as 0.004 which clearly indicates a bad fit.
The input of the pump is connected to some tubing which I placed into a plastic reservoir and filled with water.
The output of the pump is also connected to some tubing. I strapped that to the outside of the receptacle - a plastic bottle. I drilled a hole in top/side of the bottle and fed the tubing through the hole. As the fluid came out of the tubing, it would drip into the side/top of the bottle and be measured by the scale. The bottle has a cap to minimize evaporation.
Periodically, I had to empty the bottle. I would take the bottle off the scale, take off the cap and pour the water back out into the reservoir. And then put the cap back on and place the bottle back on the scale.
I noticed that the scale value would change quite a bit at this point. Sometimes as much as a gram or two. Eventually this was traced to the uncoiling of the tubing which put various forces against the length of it and therefore to the bottle. Those forces were causing the scale readings to change.
To test this, I emptied the bottle and just let the scale sit there. I waited for the scale to stop moving and periodically (informally!) wrote the readings down. The scale continued to change over the course of a day and eventually stopped moving on the second day.
The tubing is not pliable (see silicone tubing for model airplane fuel tanks http://www3.towerhobbies.com/cgi-bin/wti0001p?I=LXK129). To overcome this, I:
- put brass tubing into the silicone tubing
- built a stand that holds the tubing
- used the stand to hold the brass tubing outlet over the hole in the bottle
One downside is that the hole in the bottle leads to higher evaporation rates.
Another is that the initial setup is more complex. The brass tubing has to be precisely over the hole in the receptacle. To help with this, I bought some small petri dishes. Instead of using a bottle, I will use a petri dish that fits inside the windscreen of the scale. I will drill a hole in the petri cover just barely larger than the brass tubing. This should be marginally easier to set up.
When the pump is on, fluid comes out of the end of the tubing. It accumulates into a drop until it weighs enough to fall into the receptacle. In other words, that fluid is "delivered" but not weighed -- for a while. Then it finally drops, causing the reading to suddenly "jump".
At high rates, this isn't a problem. The drops are more or less continuous and the time lag is not significant.
For very low rates, this is a problem. The time lag between delivery and measurement potentially causes the calculation of the fluid rate to be off.
Analysis of "fluid delivery continuity" is probably impacted as well. There is an expectation that the fluid delivery is even and constant across any time span. At low rates it can be somewhat step-wise (e.g. jumping upwards periodically), but it is important to make that delivery as smooth and as even as possible.
And finally, to a lesser impact, the surface area of the drop causes more evaporation than necessary.
To overcome this, I will put a small copper wire into the end of the tubing. The idea is that the wire breaks the surface tension of the drop and the fluid should flow down the wire into the receptacle. A couple of caveats:
- If the wire is too small, it doesn't break the surface tension.
- If the wire is too large, it becomes another spot for the drop to accumulate.
TBD I need to experiment to see if there's a better way...
The first scale I used, an Ohaus, had a resolution of 0.1g and a max reading of 400g. https://www.amazon.com/gp/product/B00HJDUBIC/ref=oh_aui_detailpage_o05_s00?ie=UTF8&psc=1
Low Resolution Effects
At very low rates, the expected behavior is for readings to smoothly increase from one value to another as fluid accumulated in the receptacle on the scale. The low resolution would not cause any problems and the readings would change smoothly and accurately relative to the rate of fluid flow.
But in fact the readings would not change smoothly. It would sometimes change too soon and "jump" to the next value. For example, say it was reading 1.3g. At a particular control value (e.g. a control value of 22, or N pulses per second) the expected rate was 0.1g/hr, and therefore the expectation was also that it would take an hour to change from 1.3g to 1.4g. But because of the scale resolution of 0.1g, a displayed reading of "1.3g" could actually be anywhere from 1.30g to 1.39g, or even slightly outside this range. An additional drop of water could cause the reading to not change at all or to suddenly change to the next value 1.4g. Therefore the actual amount of time to reach a given amount of fluid could be much shorter than the expected time.
This caused havoc in calculating the actual rate. I was calculating the rate using the time it took to see an expected reading. For example I would run the motor at some control value, timing how long it took for the scale to change from 0.0 to 1.0g say. If that time was 1 hour, then the control value represented an actual rate of 1 mL/hr. If the time was 2 hours, then the actual rate was 0.5 mL/hr. For the actual rate to be accurate, both the time and weight measurements have to be accurate. If the scale reading "jumped" too soon, the calculated rate could be substantially off. Since the time for the scale reading to change could vary a lot, the calculated rate could vary a lot as well.
Another problem is that the value would change upwards (as it should) and then dip back down, toggling back and forth for a while until it finally stabilized and resolved to the next higher value. For example, if the reading was 1.3g, an additional drop of water would cause the reading to move to 1.4g but a few seconds later it would read 1.3g again. Another drop of water would cause the reading to move to 1.4g again and it would stay there a bit longer but still drift back to 1.3g again. Eventually a final drop of water would cause the reading to move to 1.4g permanently. This behavior makes sense if, internally, the scale was not taking a single sample of its sensor(s) but was taking many samples and presenting a moving average to display a reading. If there was signal variation (aka noise) from that sensor, the external display would drift slightly if the weight was close to a 0.1g boundary.
I added a bit of "hysteresis" to the test software to help overcome this: I had to see the same scale value for 3 contiguous readings for it to be actually at that value. I tried different counts (up to 10) but 3 seemed to be ok. I also added a short delay after I stopped the pump to allow the scale reading to settle down. Neither of these are great solutions and could cause additional inaccuracies, but those inaccuracies are most likely smaller than not having the workarounds in place.
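The "hysteresis" workaround above can be sketched in Ruby. The count of 3 contiguous readings is from the text; the method name and the callable used to read the scale are illustrative, not the actual test software:

```ruby
# Return a scale reading only after the same value has been seen on
# 3 contiguous reads (the hysteresis count that worked in practice).
# read_fn is any callable that returns the current scale value.
def stable_reading(read_fn, required_count = 3)
  last = nil
  count = 0
  loop do
    value = read_fn.call
    if value == last
      count += 1
      return value if count >= required_count
    else
      last = value
      count = 1
    end
  end
end
```

It would be called with something like `stable_reading(-> { scale.weight })`, so a reading that toggles back and forth near a 0.1g boundary never counts as settled.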
Another troubling problem that occurred is that the reading would suddenly vary. For example, if it was reading 1.3g for a few minutes, it would suddenly shift downwards a few tenths of a gram, say to 0.7g, for 10 - 20 seconds and then rise back up to its previous value of 1.3g.
More rarely, the reading would rise temporarily to a higher value and then fall back down to its previous value. This caused problems if it happened close to the end of a run: the run would terminate too soon and the rate calculation would be substantially off.
12VDC Motor with Peristaltic Head
I found this 12V DC pump with a peristaltic head on Amazon for less than $20.
It is a simple 12V DC motor, no encoder or other feedback is available.
The peristaltic head has 3 rollers that are rotated by friction via the motor's shaft. The pump has some flexible tubing that wraps around the rollers. The tubing and rollers are in a hard plastic outer case.
The motor turns the shaft, which causes the rollers to rotate. This action pinches the tubing against the outer case. The fluid in the tubing is trapped in the spaces between the rollers. Since the tubing does not move (except to get pinched), the fluid is pushed through the tubing.
The seal between the rollers, tubing and outer case is quite good. It can self-prime, in other words, it can pump air and the negative pressure on the incoming inlet is sufficient to pull water from the reservoir.
The motor shaft is chrome so there is a high potential for slip.
The motor controller is a Pololu Jrk 21v3 USB Motor Controller.
A control value from 1 - 255 is sent to the controller via a USB port, and the DC motor runs at a speed more or less proportional to the voltage the controller applies to it. A control value of 0 means stop, and the controller also has a special stop instruction. Negative numbers can be used to make the motor reverse direction.
This controller can run open loop or closed loop. Since the pump had no feedback mechanism (i.e. no shaft encoder) it was run open loop. A control value was sent to the controller which turned the DC motor at a particular speed. There was no mechanism to detect that the motor was turning at that speed or even if it was turning at all.
Lack of feedback caused some problems. For example, at very low speeds the motor stalled and there was no way for me to detect that condition within the control software.
The controller has a serial interface that uses a binary (non-Ascii) serial protocol. This caused some small problems since I used Ruby to talk to it, but they were easily overcome using the correct serial IO calls.
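As a sketch of what "the correct serial IO calls" look like in Ruby: the port has to be opened in binary mode and the command bytes built with pack, so Ruby does no encoding translation. The two-byte framing below is Pololu's compact-protocol "Set Target" command as I understand it (target 0 to 4095, which presumably maps onto the 1 - 255 control values used here); treat the exact byte layout as an assumption to be checked against the Jrk documentation:

```ruby
# Encode a Jrk "Set Target" command (compact protocol, assumed framing):
# first byte is 0xC0 plus the low 5 bits of the target, second byte is
# the high 7 bits. Target range is 0..4095.
def jrk_set_target_bytes(target)
  raise ArgumentError, "target out of range" unless (0..4095).include?(target)
  [0xC0 + (target & 0x1F), (target >> 5) & 0x7F].pack("C*")
end

# Write the command to the controller's serial device. Opening with
# "b" (binary mode) is the key detail: it stops Ruby from mangling
# the non-ASCII bytes of the protocol.
def send_target(port_path, target)
  File.open(port_path, "r+b") do |port|
    port.write(jrk_set_target_bytes(target))
    port.flush
  end
end
```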
I used an Ohaus SP401 scale that can display 0.1g resolution up to 400g capacity.
This scale is solid and works very well. It comes with a calibration weight and was accurate as far as I can tell.
I also picked up a serial port interface for it and used that to read values and tare the scale. The serial protocol is simple using some text commands and responses.
The control software is straightforward. It was all written in Ruby running on Ubuntu (Linux).
Opening and closing the ports was straightforward. Over time, I found that there was some internal buffering going on in the scale, so there could be garbage characters on the first read. To clear those buffers, after initialization I would read the port until NULs were returned.
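That buffer-draining step can be sketched as follows. The port object is assumed to be a Ruby IO on the serial device (so it responds to read_nonblock); the method name and chunk size are mine:

```ruby
# Drain any stale bytes left in the scale's internal buffer after the
# port is opened. Reads non-blocking until only NUL bytes (or nothing
# at all) come back, then returns, leaving the port clean.
def drain_scale_buffer(port, chunk = 64)
  loop do
    data = begin
      port.read_nonblock(chunk)
    rescue IO::WaitReadable, EOFError
      return # nothing pending: buffer is clean
    end
    return if data.bytes.all?(&:zero?) # only NULs left
  end
end
```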
The goal for the control software was to be able to send down a rate, say 100 mL/hr, and the correct control value would be chosen and sent to the pump. There were two steps done to achieve this:
- find out what the actual fluid rate is for a given control value
- create a function that, given the requested fluid rate, returns the control value to command the pump
A series of tests were done to determine what the actual fluid rate was for a given control value (1 - 255). The data was accumulated over a wide range of control values. I tried to run at the same control values several times so that there was some redundant data.
I could not run the pump at very low values since the motor would stall. I also found that if the pump was left alone over night, the first few runs the next day would stall. This occurred more often at lower rates, but could fail even at higher rates. To overcome this, I "loosened up" the pump by running it for a few minutes at the highest rate possible. This is a mechanical problem, most likely caused by the choice of tubing material not being pliable enough.
These tests were done with a Ruby script which would:
- tare the scale
- choose a control value
- run until 10mL or more had been delivered
- read the scale
- calculate the actual rate in mL/hr
- save the control value and actual rate as a pair
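The steps above can be sketched as one Ruby method. The `pump` and `scale` objects and their method names are placeholders for the real drivers; the 10mL threshold and the rate arithmetic (grams of water taken as mL) are from the text:

```ruby
# One calibration run: tare, run the pump at a fixed control value
# until at least 10 mL (~10 g of water) has accumulated, then compute
# the actual delivered rate in mL/hr from the elapsed time.
def calibration_run(pump, scale, control_value, target_ml = 10.0)
  scale.tare
  start = Time.now
  pump.run(control_value)
  sleep 1 until scale.weight >= target_ml # 1 g of water ~= 1 mL
  pump.stop
  elapsed_hours = (Time.now - start) / 3600.0
  actual_rate = scale.weight / elapsed_hours
  [control_value, actual_rate] # the (control value, rate) data pair
end
```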
Eventually I removed the taring of the scale and simply read the initial value of the scale. I also added delays between operations to allow the scale to "settle down".
Periodically the scale reading would suddenly dip by 0.5 to 1.0g or so and then rebound to its former value. Is there something "wrong" with the scale?
Calculate Rate to Control Value Function
Once I had the raw data, I used least-squares regression to find the coefficients of a polynomial function that translated a given rate (in mL/hr) into a control value. I used a spreadsheet to plot and graph the data. Generally speaking the control value to actual rate relationship was nearly linear; at the highest rates, the fluid rate dropped off slightly. I tried various types of fit, e.g. linear, 2nd order, 3rd order and higher, to see which was best. LibreOffice (an open-source spreadsheet) can calculate the R² value, and 2nd order gave the best fit.
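The same 2nd-order fit the spreadsheet produced can be reproduced in Ruby with the standard Matrix library, via the normal equations. This is a sketch under my own naming, not the script actually used:

```ruby
require "matrix"

# Fit y = c0 + c1*x + c2*x^2 by least squares: build the Vandermonde
# matrix A, then solve the normal equations (A'A)c = A'y.
def quadratic_fit(xs, ys)
  a = Matrix[*xs.map { |x| [1.0, x.to_f, x.to_f**2] }]
  y = Vector[*ys.map(&:to_f)]
  ((a.transpose * a).inverse * a.transpose * y).to_a # [c0, c1, c2]
end
```

Here `xs` would be the requested rates and `ys` the control values (or vice versa for the rate-check direction), straight from the accumulated data pairs.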
Testing and Results
Then I wrote a Ruby script to test the polynomial. I chose some simple rates, say 100 mL/hr, and ran it for a specific time, say 30 minutes. The scale should show, in this case, 50g (== 50 mL).
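With fitted coefficients in hand, translating a requested rate into a pump command is one polynomial evaluation plus clamping to the controller's 1 - 255 range. A sketch (the coefficients passed in are placeholders, not the fitted values):

```ruby
# Translate a requested fluid rate (mL/hr) into a pump control value
# by evaluating the fitted 2nd-order polynomial, then rounding and
# clamping to the controller's valid 1..255 range.
def control_value_for(rate, coeffs)
  c0, c1, c2 = coeffs
  raw = c0 + c1 * rate + c2 * rate * rate
  raw.round.clamp(1, 255)
end
```

The clamp matters at the extremes: a requested rate below the pump's stall threshold or above its maximum would otherwise produce a control value the controller cannot act on.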
The initial runs were not good. I traced the problem down to some bad raw data, e.g. partial runs, runs done without the "loosening up", etc. I cleared out the raw data and re-ran the tests in a more controlled manner.
After re-calculating the coefficients, it performed quite well given the simplicity of the system. At some rates it was accurate; at others it was off by as much as 10 - 15%. It was generally consistent too: running tests over multiple runs at the same rate would give roughly the same results.
- If I tried to run it too slow, the pump would stall, so the lowest rate turned out to be around 20 - 30 mL/hr
- the high end rate was very high: 6000+ mL/hr was the max this particular pump could achieve
- at low rates the scale would not detect fluid rate changes, so the data was not terribly accurate; e.g. running at control value 49 vs 50 showed no difference in the fluid rate.
- I did not test it for long periods of time; what happens when the tubing in the pump head wears out or becomes more pliable? What is the effect on the overall accuracy?
- I did not test it with back pressure. The system would simply drip water into a jar on the scale, i.e. without any obstructions or constrictions to the fluid flow. Would having constrictions (e.g. a needle, or a narrowing of the tubing) cause any significant change to the actual fluid rate?
- I did not test it against more than one pump. There could be a substantial impact from pump to pump based on mechanical & manufacturing differences for the rollers, case, tubing, etc.
I believe that more data, and more accurate data, would have calculated a much better set of coefficients and I would expect the error to come down substantially.
Adding a motor encoder would make the system substantially more accurate, as well as allowing for error detection (e.g. stalling).