
sihaya

Not that I know of... I posted that "critique" a while ago in response to some things Mr. Borneman said on the Reefland forum. Unfortunately, those threads are gone now (I don't know why. They never got ugly or anything). The forum on MARSH dedicated to the salt study is also gone. Not that any of this means anything... except that it might be a while before any more information comes out.
 
Hello all... great thread you have here. Did you all know it's one of the top results when you type "salt study borneman lowe" into Google? Congrats :D

Here's my take on the salt study thread (note, it's becoming increasingly hard to find due to aggressive censorship measures--j/k ;) sorta). Though, please, before I say anything, let me insist that I'm not at all trying to attack Mr. Borneman or Ms. Lowe. I'm ONLY questioning the merits of this study. So here goes...

Basics first:

First, let's talk a little about experiment design so you all can understand what's going on here:

The most basic kind of experiment design is one where you look at one dependent variable responding/correlating to one independent variable. For example, let's pretend you're doing a study that looks at muscle growth (dependent variable) and steroid use (independent variable). You could take 100 mice, inject half with steroids and half with saline solution, give them the same exercise routine and diet for 3 months, then measure their muscle mass at the end. You have to use a lot of mice for both the experimental group getting the steroids and the control group getting saline solution in order to minimize error due to differences among individual mice. To understand this more clearly... suppose you had only used two mice, one control and one getting steroids. At the end of the 3 months, you wouldn't be able to "trust" the results because you can't be sure that the mouse that got the steroids wasn't at a genetic advantage for muscle growth. I think everyone gets this basic idea, right?
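To make that concrete, here's a minimal sketch of how the two-group (100-mouse) version might be analyzed. The group sizes, means, and the Python analysis below are purely illustrative assumptions, not from any real study:

```python
# Hypothetical two-group design: 50 saline (control) mice vs. 50 steroid mice.
# All numbers here are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

control = rng.normal(loc=25.0, scale=3.0, size=50)  # muscle mass (g), saline group
steroid = rng.normal(loc=28.0, scale=3.0, size=50)  # muscle mass (g), steroid group

# With 50 animals per group, individual genetic differences largely average out,
# so a simple two-sample t-test can ask whether the group means really differ.
t, p = stats.ttest_ind(steroid, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```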

Moving on...

So, what do you do when you don't have 100 mice? What if you only have two mice? Can you still do the study? Perhaps. You might be able to do a repeated measures study. What the heck is a repeated measures study? Glad you asked...

One of the most popular and well-known repeated measures designs is the pretest/posttest experimental design. For example, you can take the two mice, measure their muscle mass at the very start of the study, then weekly for 4 weeks. Then you inject both with saline solution and continue your measurements for another 4 weeks. Next you inject both with steroids and repeat weekly measurements for yet another 4 weeks. Because you're not comparing results from two different mice, but results over time on the same mice, you gain statistical power. Get it? Think about it for a sec, you will.
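If it helps, here's a rough sketch of what the data layout for that two-mouse pretest/posttest design could look like, and how a repeated-measures ANOVA might be run on it. The numbers and the statsmodels call are assumptions for illustration, not anyone's actual protocol:

```python
# Hypothetical repeated measures layout: two mice, each measured weekly
# through baseline, saline, and steroid phases. All numbers are invented.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
phase_mean = {"baseline": 24.0, "saline": 24.5, "steroid": 27.0}

rows = []
for mouse in ["m1", "m2"]:
    for phase in ["baseline", "saline", "steroid"]:
        for week in range(1, 5):
            mass = phase_mean[phase] + 0.1 * week + rng.normal(0.0, 0.3)
            rows.append({"mouse": mouse, "phase": phase, "mass": mass})

df = pd.DataFrame(rows)

# Each mouse serves as its own control: the comparison is phase vs. phase
# within the same animal, which is where the extra statistical power comes from.
res = AnovaRM(df, depvar="mass", subject="mouse",
              within=["phase"], aggregate_func="mean").fit()
print(res)
```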

This, of course, is not the only example of a repeated measures design. There are all kinds of these study designs. But the basic idea is the same... to test the same individuals with different "treatments" over time. You tend to do this when you don't have enough subjects to separate into study groups as you would in a "normal" experiment.

Now, finally, about the salt study:

We have 10-gallon tanks, one for each salt plus a control of natural sea water. We have one independent variable and multiple dependent variables measured over time. Now, first off, what kind of study does this look like? Does it look more like the first example I gave of having 100 mice, or the second of having 2 mice? It kinda looks like a mix of both, right? Let's take a deeper look...

Statistically and conceptually, it looks very much like a classic experimental design flawed by having only one subject per level of the independent variable (i.e. one tank per salt).
Note: "We show that there can be extreme variation among identical tanks, even without any live animals" - Toonen and Wee (http://www.advancedaquarist.com/2005/7/aafeature)

Mr. Borneman, however, would like us to think of this as being more like a kind of repeated measures study to be analyzed with ANOVA (a mathematical model used to analyze this kind of data). Even being most generous with the boundaries of logic and reason, I could only accept this claim if the salt brands were consistent. But they are not. Again, as Mr. Borneman himself concedes, the salt brands are often inconsistent even between batches. So, even with all the power and forgiveness one can gain from a repeated measures study, it doesn't apply here because the batches of the salt brands weren't consistent, and the experimenters only made this inconsistency more pronounced by doing 100% water changes with each new batch of salt.
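To illustrate why one tank per salt is such a problem even with many measurements over time, here's a tiny simulation (all parameters are invented) in which no salt differs at all, yet the single-tank "salt" averages still spread apart purely from tank-to-tank variation:

```python
# Simulate 10 tanks that all receive identical water chemistry; each tank just
# has its own random bias (position, livestock, flow, ...) plus measurement noise.
import numpy as np

rng = np.random.default_rng(1)

n_tanks = 10          # one tank per "salt"
n_measurements = 12   # repeated measurements over time on each tank

tank_bias = rng.normal(0.0, 0.5, size=n_tanks)
data = tank_bias[:, None] + rng.normal(0.0, 0.1, size=(n_tanks, n_measurements))

# The per-tank means differ noticeably even though no "salt" effect exists:
# with a single tank per treatment, the tank effect and the salt effect are
# confounded no matter how many repeated measurements are taken.
print(np.round(data.mean(axis=1), 2))
```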

Now for how this study could have been done (in light of the statistical power afforded by some repeated measures study designs):

Instead of studying one salt in one tank, they should have studied all the salts in all the tanks... over time. For example, the experimenters could have started with natural sea water until the tanks were "cycled." Then, every 3-4 months, they could have changed the salt brand in all the tanks until every tank had seen every salt for a period of 3-4 months (taking measurements of the dependent variables at time intervals all along the way and with each change of salt brand). Granted, there are a lot of salt brands to test, so this could take a long time. However, they could also have split the tanks into groups of 5 and tested half the salts on 5 tanks and the other half on the other 5, halving the time needed for this kind of study.
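Here's a rough sketch of what data from that rotation design might look like and how it could be analyzed, treating each tank as the repeated-measures "subject." The tank count, the five hypothetical brands, and the growth numbers are all assumptions for illustration:

```python
# Hypothetical rotation design: every tank eventually runs on every salt, so
# "tank" is the repeated-measures subject and "salt" is the within factor.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
tanks = [f"tank{i}" for i in range(1, 11)]
salts = [f"salt{chr(65 + i)}" for i in range(5)]  # saltA ... saltE (hypothetical)

rows = []
for tank in tanks:
    tank_bias = rng.normal(0.0, 0.5)  # each tank's own quirks
    for salt in salts:
        # e.g. mean coral growth over the 3-4 month period on that salt
        growth = 1.0 + tank_bias + rng.normal(0.0, 0.2)
        rows.append({"tank": tank, "salt": salt, "growth": growth})

df = pd.DataFrame(rows)

# Because every tank sees every salt, tank-to-tank differences can now be
# separated from salt-to-salt differences instead of being confounded with them.
print(AnovaRM(df, depvar="growth", subject="tank", within=["salt"]).fit())
```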

The downfall of this proposed idea, and the problem with many repeated measures studies, is that the subjects can "fatigue" or "learn" over time. In the example given with the mice, the mice may have "bulked up" by the time they got the steroids, thereby perhaps limiting the additional effect the steroids might have. In this case, the tanks would be experiencing the salts at different ages... and that would be a problem. However, it would be a statistically manageable problem, since all the tanks would be aging at the same time.
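One standard way to manage that aging/order problem is to counterbalance which salt each tank sees in each period, so no brand gets tested only on young tanks or only on old ones. A minimal sketch of such a rotation schedule, with hypothetical brand names and tank count:

```python
# Simple cyclic (Latin-square-style) schedule: each salt appears exactly once
# in every period across the tanks, so tank age doesn't pile up on one brand.
salts = ["saltA", "saltB", "saltC", "saltD", "saltE"]  # hypothetical brands
n_tanks = 5

schedule = {
    f"tank{t + 1}": [salts[(t + period) % len(salts)] for period in range(len(salts))]
    for t in range(n_tanks)
}

for tank, order in schedule.items():
    print(tank, "->", ", ".join(order))
```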

OK, there's more I could say, but this is a long post already, and I think I've made my point. I'm not being "close-minded" and my objections are not "nonsensical"... nor am I trying to embarrass/offend the experimenters. I'm simply looking at this study with a critical eye, and right now it looks worthless.

I second your opinions about the design flaws of the experiment. I do not expect reliable conclusions can be drawn from it. MR member Waterplant (a statistics major) told me that the way the data are collected seems to have no statistical meaning at all.

For hobbyists, not scientists, this pilot experiment may enhance the quality of future home-grown, non-scientific studies. After all, it's more a hobby than a science to a lot of us; we like to use our simple observations to make our decisions (hardly scientific), but they sometimes work quite well.
 

sihaya

Oh, btw, this is Mr. Borneman's response to this critique. It's no longer viewable to the general public, but fortunately I was able to save a copy before it became accessible only to MARSH members:

Mr. Borneman: "Rob's experiment was not the same, nor was his design. His comments are valid, but not directly applicable. Whether or not what he found in terms of variability holds true for our entirely different experimental design remains to be seen."

That really wasn't my point in quoting Dr. Toonen's study. My only point was that even "identically" set up experimental tanks can show VERY different performance.

Mr. Borneman: "As mentioned, there is little understanding of the design or analyses of the study in the previous post by sihaya. No results have been presented, so no critique is possible, and when such results are presented they will be judged by scientific peers, not 24-year-old law students with a year and a half of aquarium keeping experience and no scientific training."

:rolleyes: I do have scientific training (at the very least, I have written/published a research paper while interning at the NIH).

Mr. Borneman: "I would link some of the hundreds, if not thousands, of peer-reviewed works using aquariums, repeated-measures ANOVAs, mesocosms and nearly identical experimental designs, but why? This 'internet reviewer' even used a non-peer-reviewed hobby paper to quote and justify their position. Toonen did publish his work in a peer-reviewed journal of which sihaya is apparently unaware, meaning that Toonen and Wee's data were usable, despite the variability."

Right, but Dr. Toonen used THREE tanks per treatment, not just one.

Mr. Borneman: "We do not even know if there are any significant results ourselves, as we have not even finished entering data, and we do not know ourselves what data are amenable or usable for hypothesis testing or post hoc testing. We have not claimed to have found anything, and have presented nothing but qualitative data to date. Any critique would be a critique of... nothing."

I don't know why he says this. It's quite common for research designs to be critiqued before the research is done. For instance, you have to do it for most grant applications.

Mr. Borneman: "I don't think anything more needs be said."

I respectfully disagree.
 
