- #36
collinsmark
Homework Helper
Gold Member
jarednjames said:
No misinterpretation about it, that is what the article said.

53% means you are only 3% over the expected 50/50 odds of guesswork. Without a much larger test group, that 3% doesn't mean anything. It could simply be a statistical anomaly.

Any of you seen the Derren Brown episode where he flips a coin ten times in a row and it comes up heads each time?

The test group is too small and this 3% doesn't show anything. If I sat in a room and flipped a coin 100 times, calling heads each time, heads and tails are equally likely on every toss. You'd expect a roughly even spread of heads vs. tails, but there is a chance that more heads than tails come up, which would show me as being "correct" more than 50% of the time. But there's nothing precognitive about that.

Likewise, as in the Derren Brown experiment, I could flip a coin ten times, call heads every time, and have every toss come up heads. Again, nothing precognitive there, despite what it looks like.

Yes, if you were to flip a fair coin ten times in a single experiment, the likelihood of the coin coming up all heads in a given experiment is 1/2^10, or about 1 chance in 1024. If that happened on the first experimental attempt, it would be a statistical fluke: not at all impossible, but very unlikely. And if an experimenter did not know whether the coin was fair, he might take that as positive evidence against the coin being fair, meriting further trials. But I'm not sure how the analogy applies to this set of experiments, though. Are you suspecting that the author of the study repeated the experiment perhaps hundreds of times, each with 50 or 100 people per experiment (many thousands or tens of thousands of people total), and then cherry-picked the best results? If so, that would be unethical manipulation of the data (and very costly, too). [Edit: And besides, there are easier ways to manipulate the data.]
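As an aside, the coin-flip odds discussed above are easy to check numerically. Here is a short, illustrative C# snippet (separate from the simulation program below; the class and method names are my own) that sums the binomial tail, i.e. the probability of getting at least k heads in n fair flips:

```csharp
using System;

class CoinFlipOdds
{
    // Probability of k or more heads in n fair-coin flips:
    // sum of C(n, j) for j = k..n, divided by 2^n.
    public static double TailProbability(int n, int k)
    {
        double total = 0.0;
        for (int j = k; j <= n; j++)
            total += BinomialCoefficient(n, j);
        return total / Math.Pow(2.0, n);
    }

    // Multiplicative evaluation of C(n, j), kept in double
    // to avoid integer overflow for n around 100.
    public static double BinomialCoefficient(int n, int j)
    {
        double c = 1.0;
        for (int i = 1; i <= j; i++)
            c = c * (n - j + i) / i;
        return c;
    }

    static void Main()
    {
        // All 10 heads in 10 flips: 1/1024.
        Console.WriteLine(TailProbability(10, 10)); // 0.0009765625
        // At least 53 heads in 100 flips: about 0.31, so a 53% hit
        // rate in a single 100-trial sample is, by itself, unremarkable.
        Console.WriteLine(TailProbability(100, 53));
    }
}
```

The second figure is the point of the small-sample objection: roughly three runs in ten of a fair coin land at or above 53 heads out of 100.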
And forgive me for my confusion, but I'm not certain where you are getting the 53%. In my earlier reply, I was talking about the specific experiments described in the study as "Experiment 8: Retroactive Facilitation of Recall I" and "Experiment 9: Retroactive Facilitation of Recall II." These are the experiments where participants are asked to memorize a list of words and then try to recall them. Afterward, a computer-generated random subset of half the total words is given to the subjects to perform "practice exercises" on, such as typing each word. The study seems to show that the words recalled are correlated with the random subset of "practice" words that was generated after the fact. Those are the only experiments I was previously discussing on this thread. I haven't really looked at any of the other experiments in the study.
To demonstrate the statistical relevance further, I've modified my C# program a little to add some more information; it's attached below. It now shows how many of the simulated experiments produce a DR% that is greater than or equal to the DR% reported in the study. My results show a 1 in 56 chance and a 1 in 300 chance of achieving a mean DR% greater than or equal to the mean DR% reported in the study, for the first and second experiments respectively (the paper calls them experiments 8 and 9). The program simulated 10,000 experiments in both cases -- the first with 100 participants per experiment, the second with 50, as per the paper.
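For anyone wanting to see how the DR% figures are computed, the weighted differential recall score used in the simulation below (and, per the paper, 48 words total, so the normalizing constant is 24^2 = 576) can be illustrated in isolation. The class name here is my own:

```csharp
using System;

class DRScoreExample
{
    // Weighted differential recall score: P and C are the numbers of
    // practice and control words recalled out of 48 total words.
    // The (P + C) factor weights the difference by how many words
    // were recalled overall; 576 = 24^2 normalizes the maximum to 100.
    public static double DRScore(int practiceRecalled, int controlRecalled)
    {
        return 100.0 * (practiceRecalled - controlRecalled)
                     * (practiceRecalled + controlRecalled) / 576.0;
    }

    static void Main()
    {
        Console.WriteLine(DRScore(12, 12)); // no bias either way: 0
        Console.WriteLine(DRScore(14, 10)); // 100 * 4 * 24 / 576 = 16.67 (approx.)
        Console.WriteLine(DRScore(24, 0));  // maximum possible score: 100
    }
}
```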
Here are the possible interpretations, as I see them:
(I) The author of the paper might really be on to something. This study may be worth further investigation and attempted reproduction.
(II) The data obtained in the experiments were a statistical fluke. However, for the record, if the experiment were repeated many times, the statistics show that the chances of achieving a mean DR% at or above what is given in the paper, merely by chance and equal odds, are roughly 1 out of 56 for the first experiment (consisting of 100 participants, mean DR% of 2.27%) and roughly 1 out of 333 for the second experiment (consisting of 50 participants, mean DR% of 4.21%).
(III) The experiments were somehow biased in ways not evident from the paper, or the data were manipulated or corrupted somehow.
In my own personal, biased opinion [edit: being the skeptic that I am], I suspect that either (II) or (III) is what really happened. But all I am saying in this post is that the statistics quoted in the paper are actually relevant. Granted, a larger sample size would have been better, but still, even with the sample size given in the paper, the results are statistically significant. If we're going to poke holes in the study, we're not going to get very far by poking holes in the study's statistics.
Below is the revised C# code. It was written as a console program in Microsoft Visual C# 2008, if you'd like to try it out. You can modify the parameters near the top and recompile to test different experimental parameters and numbers of simulated experiments.
(Again, pardon my inefficient coding. I wasn't putting a lot of effort into this).
Code:
//Written by Collins Mark.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Precognition_tester
{
    class Program
    {
        static void Main(string[] args)
        {
            int NumLoops = 10000;  // <== number of experiments
            int SampleSize = 50;   // <== number of participants in each experiment.

            // This represents the paper's mean DR% threshold. Used for
            // comparison of simulated mean DR% values. Should be 2.27
            // for SampleSize of 100, and 4.21 for SampleSize of 50,
            // to compare directly with the paper's results.
            double DRcomparisonThreshold = 4.21;

            double memoryMean = 18.4;  // <== average number of words recalled.
            double memoryStDev = 5;    // <== standard deviation of number of words
                                       //     recalled (I had to guess at this one)
            int ItemsPerCat = 12;
            int i;

            Random uniRand = new Random();

            // Load the category lists.
            List<string> foodList = new List<string>
            {
                "HotDogs", "Hamburgers", "Waffles", "IceCream", "Coffee",
                "Pizza", "Guinness", "SausageEggAndCheeseBiscuit", "Toast",
                "Salad", "Taco", "Steak"
            };

            List<string> animalList = new List<string>
            {
                "Cat", "Dog", "Snake", "Whale", "Bee", "Spider",
                "Elephant", "Mongoose", "Wombat", "Bonobo", "Hamster", "Human"
            };

            List<string> occupationsList = new List<string>
            {
                "Engineer", "Plumber", "TalkShowHost", "Doctor", "Janitor",
                "Prostitute", "Cook", "Thief", "Pilot", "Maid", "Nanny",
                "Bartender"
            };

            List<string> clothesList = new List<string>
            {
                "Shirt", "Shoes", "Jacket", "Undershorts", "Socks", "Jeans",
                "Wristwatch", "Cap", "Sunglasses", "Overalls", "LegWarmers",
                "Bra"
            };

            // Add elements to superset without clustering.
            List<string> superset = new List<string>();
            for (i = 0; i < ItemsPerCat; i++)
            {
                superset.Add(foodList[i]);
                superset.Add(animalList[i]);
                superset.Add(occupationsList[i]);
                superset.Add(clothesList[i]);
            }

            mainLoop(
                NumLoops,
                SampleSize,
                DRcomparisonThreshold,
                ItemsPerCat,
                memoryMean,
                memoryStDev,
                superset,
                foodList,
                animalList,
                occupationsList,
                clothesList,
                uniRand);
        }
        // This is the big, main loop.
        static void mainLoop(
            int NumLoops,
            int SampleSize,
            double DRcomparisonThreshold,
            int ItemsPerCat,
            double memoryMean,
            double memoryStDev,
            List<string> superset,
            List<string> foodList,
            List<string> animalList,
            List<string> occupationsList,
            List<string> clothesList,
            Random uniRand)
        {
            // Report something to the screen.
            Console.WriteLine("Simulating {0} experiments of {1} participants each", NumLoops, SampleSize);
            Console.WriteLine("...Calculating...");

            // Create list of meanDR of separate experiments.
            List<double> meanDRlist = new List<double>();

            // Initialize DR comparison counter.
            int NumDRaboveThresh = 0; // Number of mean DR% values at or above the comparison threshold.

            // Loop through main big loop.
            for (int mainCntr = 0; mainCntr < NumLoops; mainCntr++)
            {
                // Create array of participants' DRs for a given experiment.
                List<double> DRarray = new List<double>();

                // Loop through each participant in one experiment.
                for (int participant = 0; participant < SampleSize; participant++)
                {
                    // Reset parameters.
                    int P = 0;     // number of practice words recalled.
                    int C = 0;     // number of control words recalled.
                    double DR = 0; // weighted differential recall (DR) score.

                    // Create recalled set.
                    List<string> recalledSet = new List<string>();
                    createRecalledSet(
                        recalledSet,
                        superset,
                        memoryMean,
                        memoryStDev,
                        uniRand);

                    // Create random practice set.
                    List<string> practiceSet = new List<string>();
                    createPracticeSet(
                        practiceSet,
                        foodList,
                        animalList,
                        occupationsList,
                        clothesList,
                        ItemsPerCat,
                        uniRand);

                    // Compare recalled set to practice set.
                    foreach (string strTemp in recalledSet)
                    {
                        if (practiceSet.Contains(strTemp))
                            P++;
                        else
                            C++;
                    }

                    // Compute weighted differential recall (DR) score.
                    DR = 100.0 * (P - C) * (P + C) / 576.0;

                    // Record DR in list.
                    DRarray.Add(DR);

                    // Report output.
                    //Console.WriteLine("DR%: {0}", DR);
                }

                // Record mean DR.
                double meanDR = DRarray.Average();
                meanDRlist.Add(meanDR);

                // Update comparison counter.
                if (meanDR >= DRcomparisonThreshold) NumDRaboveThresh++;

                // Report average DR.
                //Console.WriteLine("Experiment {0}, Sample size: {1}, mean DR: {2}", mainCntr, SampleSize, meanDR);
            }
            // Finished looping.

            // Calculate mean of meanDR.
            double finalMean = meanDRlist.Average();

            // Calculate standard deviation of meanDR.
            double finalStDev = 0;
            foreach (double dTemp in meanDRlist)
            {
                finalStDev += (dTemp - finalMean) * (dTemp - finalMean);
            }
            finalStDev = finalStDev / NumLoops;
            finalStDev = Math.Sqrt(finalStDev);

            // Report final results.
            Console.WriteLine(" ");
            Console.WriteLine("Participants per experiment: {0}", SampleSize);
            Console.WriteLine("Number of separate experiments: {0}", NumLoops);
            Console.WriteLine("Mean of the mean DR% from all experiments: {0}", finalMean);
            Console.WriteLine("Standard deviation of the mean DR%: {0}", finalStDev);
            Console.WriteLine("");
            Console.WriteLine("Comparison threshold (from study): {0}", DRcomparisonThreshold);
            Console.WriteLine("Total number of meanDR above comparison threshold: {0}", NumDRaboveThresh);
            Console.WriteLine("% of meanDR above comparison threshold: {0}%", 100.0 * ((double)NumDRaboveThresh) / ((double)NumLoops));
            Console.ReadLine();
        }
        // Box-Muller transform: converts two uniform random numbers on (0,1)
        // into one standard-normal random number.
        static double Gaussrand(double unirand1, double unirand2)
        {
            return (Math.Sqrt(-2 * Math.Log(unirand1)) * Math.Cos(2 * Math.PI * unirand2));
        }

        static void createRecalledSet(List<string> recalledSet, List<string> superSet, double mean, double stdev, Random unirand)
        {
            // Determine how many words were recalled (random, normally distributed).
            // Guard against 0, since Math.Log(0) is undefined.
            double unirand1 = unirand.NextDouble();
            double unirand2 = unirand.NextDouble();
            while (unirand1 == 0.0) unirand1 = unirand.NextDouble();
            while (unirand2 == 0.0) unirand2 = unirand.NextDouble();
            double gaussrand = Gaussrand(unirand1, unirand2);
            gaussrand *= stdev;
            gaussrand += mean;
            int recalledCount = (int)gaussrand;
            if (recalledCount > superSet.Count) recalledCount = superSet.Count;

            // Create temporary superset and copy elements over.
            List<string> tempSuperSet = new List<string>();
            foreach (string strTemp in superSet)
            {
                tempSuperSet.Add(strTemp);
            }

            // Randomize temporary superset.
            shuffleList(tempSuperSet, unirand);

            // Copy the first recalledCount items to recalledSet.
            for (int i = 0; i < recalledCount; i++)
            {
                recalledSet.Add(tempSuperSet[i]);
            }
        }

        static void createPracticeSet(
            List<string> practiceList,
            List<string> foodList,
            List<string> animalList,
            List<string> occupationsList,
            List<string> clothesList,
            int itemsPerCat,
            Random uniRand)
        {
            List<string> tempFoodList = new List<string>();
            List<string> tempAnimalList = new List<string>();
            List<string> tempOccupationsList = new List<string>();
            List<string> tempClothesList = new List<string>();

            // Load temporary lists.
            foreach (string strTemp in foodList)
                tempFoodList.Add(strTemp);
            foreach (string strTemp in animalList)
                tempAnimalList.Add(strTemp);
            foreach (string strTemp in occupationsList)
                tempOccupationsList.Add(strTemp);
            foreach (string strTemp in clothesList)
                tempClothesList.Add(strTemp);

            // Shuffle temporary lists.
            shuffleList(tempFoodList, uniRand);
            shuffleList(tempAnimalList, uniRand);
            shuffleList(tempOccupationsList, uniRand);
            shuffleList(tempClothesList, uniRand);

            // Load practice list: half of each category.
            for (int i = 0; i < itemsPerCat / 2; i++)
            {
                practiceList.Add(tempFoodList[i]);
                practiceList.Add(tempAnimalList[i]);
                practiceList.Add(tempOccupationsList[i]);
                practiceList.Add(tempClothesList[i]);
            }

            // Shuffle practice list.
            shuffleList(practiceList, uniRand);
        }

        // Method to shuffle lists: repeatedly pick a random remaining
        // element, then copy the shuffled result back into the original list.
        static void shuffleList(List<string> list, Random unirand)
        {
            List<string> shuffledList = new List<string>();
            while (list.Count() > 0)
            {
                int indexTemp = unirand.Next(list.Count());
                shuffledList.Add(list[indexTemp]);
                list.RemoveAt(indexTemp);
            }
            foreach (string strTemp in shuffledList) list.Add(strTemp);
        }
    }
}
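Incidentally, the shuffleList method above rebuilds the list through a second list and repeated RemoveAt calls. A standard Fisher-Yates shuffle does the same job in place with no extra allocation; a minimal sketch (the class name is my own, not part of the program above):

```csharp
using System;
using System.Collections.Generic;

class ShuffleSketch
{
    // In-place Fisher-Yates shuffle: walk backward through the list,
    // swapping each element with a randomly chosen element at or
    // before its own position. Each permutation is equally likely.
    public static void ShuffleList(List<string> list, Random rand)
    {
        for (int i = list.Count - 1; i > 0; i--)
        {
            int j = rand.Next(i + 1); // 0 <= j <= i
            string tmp = list[i];
            list[i] = list[j];
            list[j] = tmp;
        }
    }

    static void Main()
    {
        var words = new List<string> { "Cat", "Dog", "Snake", "Whale" };
        ShuffleList(words, new Random());
        Console.WriteLine(string.Join(", ", words));
    }
}
```

Either version gives a uniform shuffle; this one just avoids the O(n^2) cost of the RemoveAt calls, which is irrelevant for 48-word lists but tidier.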