Psychology and Global Consciousness Measured Part 3

The GCP Experiment

The GCP recorded its first data on August 4, 1998. Beginning with a few random sources, the network grew to about 10 instruments by the beginning of 1999, and to 28 by 2000. It has continued to grow, stabilizing by 2004 at roughly 60 to 65 eggs, as the networked devices are called.

The early experiment simply asked whether the network was affected when powerful events caused large numbers of people to pay attention to the same thing. It was based on a prediction registry that specified, a priori for each event, a time period and an analysis method for examining the data for changes in statistical measures. Various other modes of analysis, including attempts to find general correlations of GCP statistics with other longitudinal variables, have been considered and continue to be developed.
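
In concrete terms, a registry entry can be pictured as a small record that fixes the event window and the analysis method before the data are examined. The sketch below is only an illustration of that idea; the field names and sample values are hypothetical and do not reproduce the project's actual registry format.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical registry entry; field names are illustrative only.
    @dataclass
    class RegistryEntry:
        event_name: str        # short description of the global event
        start_utc: datetime    # a priori start of the examination period
        end_utc: datetime      # a priori end of the examination period
        recipe: str            # analysis method, e.g. "network_variance"

    entry = RegistryEntry(
        event_name="Example global event",
        start_utc=datetime(2000, 1, 1, 0, 0, tzinfo=timezone.utc),
        end_utc=datetime(2000, 1, 1, 6, 0, tzinfo=timezone.utc),
        recipe="network_variance",
    )

The essential point is that every field is committed to in advance, so the later analysis involves no post hoc choices.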

Purpose

In the most general sense, the purpose of the project was and is to create and document a consistent database of parallel streams of random numbers generated by high-quality physical sources. The goal is to determine whether statistics computed from these data show any detectable correlations with independent long-term physical or sociological variables. In the original experimental design we asked the more limited question of whether there is a detectable correlation between deviations from randomness and the occurrence of major events in the world.

Hypothesis

The formal hypothesis of the original event-based experiment is very broad. It posits that engaging global events will correlate with deviations in the data. The identification of global events and the times at which they occur are specified case by case, as are the recipes for calculating the variance deviations. This latitude of choice makes the original experiment complicated to analyse, but by standardizing the results, we can obtain a composite outcome. This constitutes a general test of the broadly defined formal hypothesis.

Analytical Recipes

The formal events are fully specified in a prediction registry. Over the years, several different analysis recipes have been invoked, though most analyses specify either the “network variance” or the “device variance” method. Each recipe stipulates how the event statistic is calculated: it first defines a block statistic to be computed within each block of the examination period, and then a method for combining those block values into a single event statistic.
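
As an illustration of how such a recipe can be structured, the sketch below computes a block statistic from the per-egg trial sums and then sums the blocks over the examination period. It assumes 200-bit trials, treats the network variance as the squared Stouffer Z across the reporting eggs, and treats the device variance as the sum of squared per-egg Z scores; these forms are simplified readings of the published recipes, and the function names are our own.

    import numpy as np

    def z_scores(trial_sums, n_bits=200):
        """Convert per-egg trial sums (sums of n_bits bits) to Z scores.
        For unbiased bits the expected mean is n_bits/2 and the variance n_bits/4."""
        mean = n_bits / 2.0
        sd = np.sqrt(n_bits / 4.0)
        return (np.asarray(trial_sums, dtype=float) - mean) / sd

    def network_variance_block(trial_sums):
        """Block statistic: squared Stouffer Z across the reporting eggs."""
        z = z_scores(trial_sums)
        stouffer_z = z.sum() / np.sqrt(len(z))
        return stouffer_z ** 2      # chi-square with 1 df under the null

    def device_variance_block(trial_sums):
        """Block statistic: sum of squared per-egg Z scores."""
        z = z_scores(trial_sums)
        return (z ** 2).sum()       # chi-square with N df under the null

    def event_statistic(blocks, block_fn):
        """Event statistic: sum of the block statistics over the event period."""
        return sum(block_fn(b) for b in blocks)

Under the null hypothesis each block statistic follows a chi-square distribution, so the event statistic can be referred to a chi-square with the appropriate total degrees of freedom and converted to a standard normal deviate for the composite analysis.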


Compound Result

Over the six years since the inception of the project, 170 replications of the basic hypothesis test have been accumulated. The composite result is a statistically significant departure from expectation of 4 standard deviations. The combined result from these analyses thus supports the formal hypothesis, and this encourages a deeper look, beginning with a thorough re-analysis of the original findings and proceeding to extensive analysis using other methods.
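
A standard way to obtain such a composite from standardized event outcomes is Stouffer's method: the individual Z scores are summed and rescaled so that the combination is itself a standard normal deviate under the null hypothesis. The sketch below shows that combination in generic form; it is not the project's code, and no actual event scores are included.

    import numpy as np

    def composite_z(event_z_scores):
        """Combine standardized event outcomes (Z scores) with Stouffer's method.
        Under the null hypothesis the composite is itself a standard normal deviate."""
        z = np.asarray(event_z_scores, dtype=float)
        return z.sum() / np.sqrt(len(z))

Read this way, a composite of about 4 standard deviations over 170 events corresponds to a mean per-event Z of roughly 4 / sqrt(170), or about 0.31, which is why the effect is not expected to be visible in any single replication but accumulates across the database.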

Sharpening the Focus

The focus of our effort turns now to a more comprehensive program of rigorous analyses and incisive questions intended to characterize the data more fully and to facilitate the identification of any non-random structure. We begin with thorough documentation of the analytical and methodological background for the main result, to provide a solid basis for new hypotheses and experiments. The goal is to increase both the depth and breadth of our assessments, to develop sound interpretations, and ultimately to elucidate the meaning of the original findings.
