The conclusions of the “Membership Satisfaction Study” cited by PRSA chair Rosanna Fiske, that members are “incredibly satisfied,” that member satisfaction has “improved since 2008,” and that there is a “high level of likely renewal,” are unjustified under guidelines of the American Assn. for Public Opinion Research.

Since all 21,000 members were sent the Society questionnaire, proper “random sampling” did not take place under AAPOR guidelines. Only 1,126 members replied, or about 5%.


The AAPOR calls this “SLOP,” standing for “self-selected listener opinion poll.”

“Respondents who volunteer to participate in such surveys tend to be more extreme or otherwise very different in their views than those who do not,” says AAPOR. “In no way can they be said to be representative of the population, so the survey results cannot be used to say anything useful about a target population.”

Bob Conrad, Ph.D., who operates a blog called “The Good, The Bad, The Spin,” has written three critiques of the PRS research, the third and latest quoting the guidelines of AAPOR.

Rockland Taken to Task


Conrad lashes out at David Rockland, Ph.D., head of global research for Ketchum, which did the research along with Braun Research, for saying, “From my personal perspective, the worst thing one can do is spend time trying to find what could be right or wrong in the data, versus taking action in continuing to move the Society in the positive direction it is going.”

Says Conrad: “The Society would prefer this positive direction not be muddied by criticism. When Society leaders make grandiose claims about satisfaction among its members, and fail to provide supporting information used to make those pronouncements—in contradiction to its own ethics code and best practices recommendations—the Society expects its members to accept these claims without question. The net outcome of the survey is that there are no problems evident in the results, only a few ‘opportunities for improvement.’”

Society Linked with ASA


Ironically, Fiske had announced on Sept. 14 that the Society had prepared with the American Statistical Assn. a “best practices guide” for the use of statistics in PR campaigns.

Told about the apparent violation by the Society of guidelines of AAPOR, Ronald Wasserstein, executive director of the ASA, said the ASA “is glad to have partnered with the Society to do this.”

He saw no problem with the way the Society had just presented its research results.

Said Wasserstein in an e-mail: “The purpose of the best practices guide is to help PR professionals become increasingly aware of how to evaluate statistical information carefully and critically, and the ASA is glad to have partnered with the PR Society to do this. If it helps people think carefully about and, when appropriate, debate vigorously about the strengths and weaknesses of any statistical study, including one done by the PR Society (or the ASA for that matter), then the guide is serving its purpose.”

Rockland Cites Other Polls


Rockland, in defending the work of Ketchum and Braun, said that “Most surveys you see in the news have a sample size of 1,000 for the entire American public. Results of this study are projectable to the overall populations within the respective margins of error at the 95% confidence level.”
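The margin-of-error figure Rockland invokes can be checked with the standard formula for a proportion at 95% confidence, MoE = z·√(p(1−p)/n), optionally tightened by a finite-population correction for the Society's roughly 21,000 members. A minimal sketch (the sample size of 1,126 and membership of 21,000 come from this article; the formula assumes a simple random sample, which is exactly the assumption critics say this poll fails):

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=None):
    """95% margin of error for a proportion; p=0.5 is the worst case."""
    moe = z * math.sqrt(p * (1 - p) / n)
    if population:
        # finite-population correction for sampling from a small population
        moe *= math.sqrt((population - n) / (population - 1))
    return moe

# 1,126 respondents out of ~21,000 members (figures from the article)
print(round(margin_of_error(1126) * 100, 1))                    # → 2.9
print(round(margin_of_error(1126, population=21000) * 100, 1))  # → 2.8
```

The arithmetic only licenses a ±2.9% claim if respondents were randomly selected; with a 5% self-selected response, AAPOR's guidance is that the same formula cannot be meaningfully applied.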

He added that the “responses [to the PRS poll] were weighted to the overall profile of the Society membership in terms of tenure in the PR industry. This is to ensure results approximate the membership as closely as possible and is a standard practice in survey research.”

He said every survey “has its strengths and weaknesses” but the membership survey “has statistical robustness, consistency over time and is a solid piece of work on which to drive the organization forward.”

Conrad, however, says that according to survey experts, “higher response rates are only needed up to a point if the actual people surveyed are randomly selected. The Society put out a call to its entire membership and surveyed them online, another variant that can skew the validity of the responses. Surveying an entire population and then receiving a 5% response rate puts the overall responses in doubt since it is unlikely that 5% represents the entire Society membership.”