

Robert Ting-Yiu Chung
(Director of Public Opinion Programme, the University of Hong Kong)
 
Translated by Chan Suet Lai
(Research Executive, Public Opinion Programme, the University of Hong Kong)
 

Note: This article represents the view of the author and not the University of Hong Kong.

 

The popularity ratings of Chief Executive (CE) Donald Tsang have recently hit record highs again. These figures have created a positive environment for the upcoming Policy Address. Since popularity figures have their ups and downs, the author would like to continue the discussion on the fluctuations of the CE's popularity figures before the debate over the upcoming Policy Address commences.

 

This article responds to, and provides supplementary information for, an article published in the Hong Kong Daily News by Wong Ka-ying, Associate Professor at the Hong Kong Institute of Asia-Pacific Studies, Chinese University of Hong Kong (CUHK), regarding the discrepancies between the survey findings recorded by the Hong Kong Institute of Asia-Pacific Studies and the University of Hong Kong Public Opinion Programme (HKUPOP).

 

Wong Ka-ying suggested three reasons to explain the discrepancies between the findings. The first is the problem of sampling error. In the article, he wrote, "The average sample size is 600 to 800 in the surveys conducted by CUHK, whereas it is around 1,000 in those by the University of Hong Kong ...... If we take a look at the CE popularity figures recorded by these two universities, the range of sampling error involved should be 1 to 1.5 marks. In other words, it is not surprising that the 2- to 3-mark discrepancies between the two sets of findings are actually due to sampling errors."

 

Sampling error is a very complicated issue; it involves both random and non-random sampling errors. Looking at the error figures alone, it is hard to draw any conclusion from the sampling errors of ratings or of percentages. Nevertheless, at the 95% confidence level only about one survey in every 20 will, purely by chance, produce a figure outside the error margin, and such irregular fluctuations are an acceptable and natural phenomenon.
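
As a rough illustration of the random component only: for a mean rating on the 0-100 scale, the margin of error at 95% confidence is approximately 1.96 times the standard deviation of the ratings divided by the square root of the sample size. The Python sketch below assumes a hypothetical standard deviation of 18 marks (an assumption for illustration, not a figure from either institute) to show how samples of 600 to 1,000 interviews, and much smaller per-day subsamples, translate into different error margins.

```python
import math

def margin_of_error(sd: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a mean rating (random error only)."""
    return z * sd / math.sqrt(n)

# Hypothetical standard deviation of about 18 marks on the 0-100 rating scale
SD = 18.0

for n in (600, 800, 1000, 150):
    print(f"n = {n:4d}: +/- {margin_of_error(SD, n):.1f} marks")

# Under these assumptions, samples of 600 to 1,000 give margins of roughly
# +/- 1.1 to 1.4 marks, while a per-day subsample of about 150 interviews
# gives nearly +/- 3 marks -- one reason per-day figures fluctuate more.
```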

 

Many research institutes try to control and reduce non-random errors in various ways, including strict randomization procedures, quality control, statistical adjustments, and so on. Take HKUPOP's surveys as an example: according to its general practice, all raw figures are weighted according to the gender-age distribution of the Hong Kong population obtained from the 2001 Population Census. Readers should read the survey reports carefully and pay attention to the detailed descriptions of the survey methodologies and sampling methods.
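
The weighting step described above can be illustrated with a minimal post-stratification (cell weighting) sketch. The gender-age cells, census shares and respondents below are entirely made up for illustration; the actual cell definitions and procedures are those stated in HKUPOP's own survey reports.

```python
# Minimal sketch of cell weighting by gender-age, with made-up numbers.
# Each respondent's weight is the census share of his or her gender-age
# cell divided by the share of that cell in the achieved sample.

# Hypothetical census population shares by gender-age cell
census_share = {
    ("M", "18-39"): 0.18, ("M", "40-59"): 0.17, ("M", "60+"): 0.12,
    ("F", "18-39"): 0.20, ("F", "40-59"): 0.19, ("F", "60+"): 0.14,
}

# Hypothetical respondents: (gender, age group, rating given to the CE)
sample = [
    ("F", "18-39", 70), ("F", "40-59", 62), ("M", "60+", 55),
    ("F", "18-39", 68), ("M", "40-59", 60), ("F", "60+", 58),
]

n = len(sample)
sample_share = {cell: 0.0 for cell in census_share}
for gender, age, _ in sample:
    sample_share[(gender, age)] += 1 / n

# Weight = population share / sample share for the respondent's cell
weights = [census_share[(g, a)] / sample_share[(g, a)] for g, a, _ in sample]

raw_mean = sum(r for _, _, r in sample) / n
weighted_mean = sum(w * r for w, (_, _, r) in zip(weights, sample)) / sum(weights)
print(f"raw mean rating:      {raw_mean:.1f}")
print(f"weighted mean rating: {weighted_mean:.1f}")
```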

 

Wong Ka-ying's second reason relates to the survey period and frequency. He wrote, "CUHK only conducts (the surveys) once a month, with the survey period at the end of the month; whereas HKUPOP carries out surveys twice every month, at the beginning and at the end of each month ...... The more meaningful way is to compare the surveys conducted by the two universities over the same period (at the end of each month). Despite this, since it is very rare for the two survey periods to overlap, the above-mentioned comparisons are for cross-reference only. Even a time difference of one or two days can lead to rapid changes in the CE's popularity ratings due to sudden social and political events."

 

This is an extremely important analysis; the author would like to explain with a recent example. In late August this year, when Donald Tsang's popularity rating had dropped to the record low level of the previous CE, two events occurred in quick succession: the assault case of the little boy, and the announcement of the CE's prospective visit to the Pearl River Delta with all Legco members. The two events happened on August 25 and 30 respectively; on August 27 the mass media ran headline commentary on Donald Tsang's 'both the people and god are angry' statement, and on August 31 they ran another headline announcing the visit to the Pearl River Delta with all Legco members.

 

HKUPOP happened to conduct its regular opinion survey from August 22 to 25, and found that the CE received 64.8 marks, whereas the Hong Kong Institute of Asia-Pacific Studies, CUHK, carried out its survey from August 24 to 26 and found that the CE scored 68.4 marks, a difference of 3.6 marks between the two scores. HKUPOP's subsequent survey was conducted from September 1 to 5, and the CE received a score of 66.7 marks. According to the per-day figures, CE Donald Tsang's popularity continued to drop from August 22 to 25, but rose and remained stable afterwards. The conclusion is crystal clear: Donald Tsang's popularity trend underwent a tremendous change from August 26 to 31, when the news about the assault case of the little boy and the prospective visit to the Pearl River Delta with all Legco members broke out.

 

The author commented in the HKUPOP press release, "The popularity of Donald Tsang has rebounded in early September ...... Tsang's reaction towards the assault case of the little boy, and his prospective visit to the Pearl River Delta with all Legco members, have probably boosted his popularity." The author did not quote the per-day figures to explain this in detail, because the sample size behind each per-day figure is very small and the corresponding sampling error is relatively large; moreover, it is not our normal practice to do so. Nevertheless, the relationship is very obvious from the statistics. The author predicts that if Wong Ka-ying were to analyze the per-day figures from August 24 to 26 collected by his institute, he should be able to see the public opinion effect of Donald Tsang's 'both the people and god are angry' statement after the assault case of the little boy. Since CUHK's single survey happened to span the very days when the incidents began, whereas HKUPOP traced the change with two separate surveys conducted before and after them, it is only natural that the findings of the two institutes differ.

 

Wong Ka-ying's third reason is the difference in the survey designs of the two universities. He wrote, "HKUPOP has only one simple question ...... whereas CUHK's surveys require the respondents to rate and evaluate the performance of the CE in 11 different areas ...... HKUPOP measures the support rating of the CE, but CUHK evaluates the CE's performance. The two concepts are different ...... CUHK's findings fluctuate less, while HKUPOP's fluctuate more. The core reason is that CUHK's surveys are less dependent on the surrounding political environment than HKUPOP's."

 

The author would like to explain and provide supplementary information here. HKUPOP's usual practice is to place the question about the CE's popularity rating at the very beginning of the questionnaire, and to record the impression rating deliberately in an unaided manner. Of course we also understand that the first few questions may prime the respondents in their thinking and may affect their responses to the subsequent questions. But since there is no perfect solution under these circumstances, we would like to ensure that the scores received by the head of Hong Kong's government, whether the Governors of the past or the current Chief Executive, remain comparable in the long run on a simple but secure basis.

 

In sum, given the different research methodologies, survey periods and frequencies, it is not surprising that the figures recorded by CUHK, HKUPOP, or indeed other professional survey institutes may differ. The more important issues are whether the macro-trends of these surveys are consistent, and whether the micro-fluctuations are explainable.

 

There is one more very important point to note: survey organizations should follow international codes of ethics in reporting their survey methodologies, demographic information, questionnaire content, survey periods, sampling methods, response rates, weighting procedures and sampling errors when releasing their figures. Once the information is open to the public and the operation is transparent, all sorts of problems can be resolved.

 
Table: Donald Tsang's popularity rating - per-day figures

  Date                     22/8    23/8    24/8    25/8    1/9     2/9     5/9     6/9     7/9
  Rating (per day)         66.3    65.6    65.6    63.0    67.4    66.1    65.7    67.2    67.1
  Sampling error*          +/-2.6  +/-2.2  +/-2.2  +/-2.0  +/-3.0  +/-2.4  +/-2.2  +/-2.0  +/-2.2
  Rating (whole survey)    64.8                            66.7
  Sampling error*          +/-1.2                          +/-1.0

* "95% confidence level" means that if we were to repeat a certain survey 100 times, using the same questions each time but with different random samples, we would expect 95 times getting a figure within the error margins specified.