On April 19, 2020, the President of Ghana announced to Ghanaians that he was, effective April 20, lifting the “partial lockdown” that had been imposed on the country since March 30, 2020.
He cited the “robustness of data” and the “constancy” of the situation as the basis for the decision, but he did not elaborate on whether by this he meant that the lockdown had achieved the generally cited purpose of that draconian form of epidemic control: reducing the number of new infections (called “incidence” in epidemiology). In later explanations to the media, the Presidential Advisor on Health would add some nuance to the effect that the purpose of the lockdown was to gather more insight into the nature of the disease’s spread so as to pave the way for more targeted measures. Such a step, in his view, was critical to preventing a worse humanitarian emergency in poorer communities, on whom the lockdown was exacting a frightening economic toll.
The inference here must be that the Administration had evaluated the state of the pro-poor economic reliefs it had rolled out at the beginning of the lockdown and determined that they could not do the job of abating suffering, and averting possible social unrest, should the lockdown continue. Throughout all these explanations, the country was nevertheless not served with any data-backed models that would enable independent researchers and other critical observers to analyze the totality of the situation for themselves. Hardly surprising, then, that before long senior public health figures outside the government, including a former Head of the Ghana Health Service, the country’s preeminent primary and secondary health agency, decided to break ranks and take serious issue with the quality and integrity of both the data and the decisions purportedly based on them.
I couldn’t suppress a wry smile. On April 11, at the peak of the partial lockdown, I had pleaded with the government to be more transparent and far more diligent than it had been to date in showing people in advance which specific metrics would prompt which specific actions. To avoid the charge of data manipulation and of after-the-fact justification of conclusions predetermined by factors other than the public interest, the public, I argued, had to be educated comprehensively about how different status indicators would serve as triggers for specific actions. That required, of course, that the status indicators themselves be clearly defined, rigorous, well thought through, and reasonably comprehensive. I am afraid to say that the government did not listen. Its status indicators remain hard to pin down, and what can be gleaned from its interpretation of the data leaves ambiguity on the most important issues, such as whether to intensify or loosen lockdowns, whether to change tack or stay the course in its approach to “mass/community screening”, and whether to admit to an acceleration of incidence or continue to insist on this “constancy” theory.
The government’s earnest and vivacious “communicators” keep pushing one theme: everyone should keep calm, trust government-appointed and favoured “experts”, and have faith that all decisions are being driven by “science” and a dogged commitment to the public interest. Unfortunately for our good friends in the government’s various PR brigades, science and expertise are not hymn sheets from which a harmonized tune can be blared from loudspeakers on the high and imperious walls of Jubilee House. “Science”, whether in a pandemic or in normal times, is a highly variegated domain of knowledge, replete with competing worldviews among disciplinary specialists of many stripes. That is why almost every university and think tank worth its salt in every sophisticated country has its own set of theories and models about the virus, complete with different, sometimes contradictory, trajectories and forecasts.
If Ghana were a more sophisticated country, there would have been more, not fewer, of the disputes amongst different types and grades of health “experts” witnessed in the aftermath of the President’s decision to lift the lockdown. Another obvious truth in this matter is that an expert not armed with data is rarely more effective than the ordinary Joe in making sense of emerging phenomena. By failing or refusing to publish sound models and put out enough data to aid serious analysis, the government consigned experts to the grapevine with the rest of us. As snippets of information and anecdotal evidence swirled around, contextless documents added a veneer of depth to opinions, until eventually things came to a head when some of the country’s most eminent health thinkers openly accused the government of “massaging” the Covid-19 incidence and “observed prevalence” statistics. The two main labs in the country handling the surge in testing found themselves in the crossfire, accused of misrepresenting their testing capacity.
The government was on the verge of squandering the trust it had built up in the early days of the pandemic, when citizens rallied behind their leaders as the sociology of emergencies predicts. In this brief article, I shall shed light on the two main data-related controversies: (a) is the incidence rate of Covid-19 “stable” in Ghana? And (b) is “pooled sampling” a satisfactory rebuttal to those who question Ghana’s effective testing capacity? I shall do so with a view to reiterating my earlier points about the extreme importance of “trustworthy data in decision-making” during a crisis such as the one we now face.

From the onset of the pandemic in Ghana, the Ghana Health Service (GHS) has published periodic bulletins to inform the public about major developments.
Researchers have complained about the unavailability of deeper and broader data sets in a format that can be exported into spreadsheets to enable them to chart trend curves and develop elaborate, on-the-fly models to explain various facets of this unprecedented phenomenon. They have humbly requested that these data sets be released promptly and consistently to allow for dynamic modelling and analysis, yet the GHS refuses to budge. Part of the reason may well be capacity. The country’s mass testing protocol has many gaps and lags. Clinical referrals for testing during routine surveillance (where people with Covid-19-like symptoms are identified by clinicians and sampled at a health facility) are currently not automated. The performance of the different tracing teams (public health personnel who actively search for cases in the community) varies. Collected samples have to be aggregated and sent to the various labs (virtually all of them in the country’s two major cities of Accra and Kumasi) using an ad hoc supply chain.
Per WHO and US CDC standards, samples need a cold chain (between 2 and 8 degrees Celsius), and if delays in dispatch will last beyond 5 days, dry ice is required for storage and transport. Ideally, samples should be transported in viral transport media (VTM), protein- and antibiotic-based mixtures that some GHS tracing teams don’t have. Some laboratory scientists complain of the occasional sample coming in without even a basic saline medium, or improperly sealed, or in volumes so small as to interfere with viral RNA extraction. In short, all the messiness of real-world supply chains. In a country with such weak health infrastructure, it is a miracle that the fine professionals working in the reference labs are still keeping things together.
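To make that handling rule concrete, here is a minimal sketch in Python of the decision it implies; this is my own paraphrase of the guidance cited above, not an official GHS or WHO tool:

```python
def storage_requirement(expected_dispatch_delay_days: float) -> str:
    """Paraphrase of the guidance cited above: keep samples in a
    2-8 degree Celsius cold chain, and switch to dry ice once the
    delay before dispatch stretches beyond 5 days."""
    if expected_dispatch_delay_days <= 5:
        return "refrigerate at 2-8 degrees Celsius in viral transport medium (VTM)"
    return "store and ship on dry ice"

for delay in (1, 4, 7):
    print(f"{delay} day(s) before dispatch: {storage_requirement(delay)}")
```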
The end result, however, is that there are inefficiencies in the data collection process, as I alluded to in my initial article. Until capacity is ramped up at all levels, the data will continue to be patchy. The delays and lumpiness in the release schedule, despite the widespread perception, are not all caused by the Ministry of Health and the Ghana Health Service’s insistence on “validating” the data before public release. But none of this would be a problem if the entire mass testing protocol were transparent to researchers and critical observers. Statistical methods exist to clean and fix data weaknesses and gaps. The real problem is the lack of interest on the part of the government’s Covid-19 response team in engaging candidly and openly with the research community on the data question.

Even as the government’s communicators were justifying the decision to lift the lockdown on the grounds of accelerated tracing, tracing figures were actually declining in Accra. In the first week after the lockdown was imposed, tracing surged from 635 contacts reached to 5,308, and sample collection rose from 589 to 4,969. By April 19, the day the lifting of the lockdown was announced, contact tracing figures were down to 2,049 and the daily tally of collected samples was as low as 1,018. Obviously, the facts on the ground did not align with the claim that lockdown measures were being replaced with aggressive contact tracing. Considering that infection statistics are highly sensitive to overall levels of tracing and sample collection, the discrepancy is curious.
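A back-of-the-envelope check of the figures quoted above makes the post-peak decline plain:

```python
# Contact-tracing and sample-collection tallies cited above: the peak reached
# during the lockdown versus April 19, the day the lifting was announced.
peak = {"contacts traced": 5308, "samples collected": 4969}
april_19 = {"contacts traced": 2049, "samples collected": 1018}

for metric, peak_value in peak.items():
    decline = 1 - april_19[metric] / peak_value
    print(f"{metric}: {peak_value} -> {april_19[metric]} ({decline:.0%} decline)")
# contacts traced: 5308 -> 2049 (61% decline)
# samples collected: 4969 -> 1018 (80% decline)
```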
Which brings us to the issue of the virus’s positive case ratio. Apart from the fact that testing positive for the virus does not imply a clinical diagnosis of Covid-19, there is also our still-limited understanding of the role viral load or concentration plays in both symptom progression and detectability. These complexities require care and diligence in executing a mass testing protocol. Thus, when the President’s advisors confidently argued, in the days following the decision to lift the lockdown, that because only 1,042 out of 68,591 individuals tested (i.e., 1.52%) were positive, the degree of community spread was somewhat restrained, they were missing some very critical granular facts. The insight is not in these global numbers, made up as they are of apples and oranges forced into a mix. It can only be found through careful disaggregation and analysis. To illustrate, let me use the central hotspot of the epidemic in Ghana. I was lucky enough to get hold of the Greater Accra Covid-19 situational report of 21st April 2020, just around the period the lockdown was lifted. The high-level breakdown of the global numbers as at that date was revealing.
Even at this high level, it can quickly be seen that the “positivity profile” of the tested individuals varies considerably across the groups listed above: those referred for testing because they were showing symptoms; those identified for testing because they had come into contact with someone who tested positive; and those tested at random because they live in an area considered to lie within a perimeter or cordon of risk, as determined by the GHS’s prevailing hotspot models. Whilst those referred by clinicians for presenting Covid-19-like symptoms had a roughly 8.5% chance of testing positive, those identified because they had been in contact with a positive case had a slightly less than 8% chance.
More crucially, those earmarked for testing simply because they were caught in the GHS risk-based net had virtually no chance of testing positive. Different biostatisticians and epidemiologists may draw different conclusions. But one of the most plausible would be that the GHS risk-based community screening model is broken and should not be lumped together with the more epidemiologically grounded models of routine surveillance and primary contact tracing. True, Ghana’s routine surveillance programs have their own weaknesses. In 2010/2011, and again in 2017, the country’s capacity to detect outbreaks of H1N1 in time to avert mass casualties was tested and found wanting. In fact, in the 2017 wave, four KUMACA students who died of H1N1 at the Komfo Anokye Teaching Hospital were diagnosed post-mortem.
Eventually, 96 people would be detected, but well past the point at which a functional surveillance program should have found them. According to the present Auditor-General, the Veterinary Services Department failed or neglected to set up an Asian Influenza pandemic preparedness system in 2010, opting instead to devote the money to workshops and other such trifles. When the epizootic crisis hit, over 400,000 livestock belonging to poor Ghanaian farmers perished from H1N1. But all that is merely to say that the true positivity rate for routine surveillance is very likely to be higher than 8.5%. Letting the unproven “community screening” numbers dilute the overall positive case ratio therefore amounts to a failure of sound analysis, all the more so since the GHS refuses to publish any document explaining the logic driving that screening model.
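Before turning to the hotspot question, the dilution point is worth making concrete. The sketch below uses the positivity rates quoted above; the cohort sizes are hypothetical round numbers chosen purely for illustration, not figures from the Greater Accra report:

```python
# Positivity rates per cohort as quoted above; cohort sizes are hypothetical.
cohorts = {
    "routine surveillance referrals": (2_000, 0.085),
    "contact tracing": (6_000, 0.080),
    "risk-based community screening": (60_000, 0.002),
}

total_tested = sum(n for n, _ in cohorts.values())
total_positive = sum(n * rate for n, rate in cohorts.values())

for name, (n, rate) in cohorts.items():
    print(f"{name}: {rate:.1%} positivity across {n:,} tests")
print(f"blended positivity: {total_positive / total_tested:.2%}")
# The blended figure (about 1.1% here) reflects how many low-risk
# community-screening samples were poured into the mix, not the intensity
# of transmission in the surveillance and tracing cohorts.
```

Change the hypothetical community-screening count and the “national” positivity figure swings with it, which is exactly why disaggregation matters.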
Furthermore, enhanced granularity only deepens anxiety. The preliminary “hotspot perimeter designation model” orally described by the President’s coordinators of the Covid-19 response effort recommended a concentration of effort in Ayawaso West. At one point, it was even suggested that testing there should be universal and compulsory, only for the proclamation to be withdrawn without explanation or ceremony. It soon became clear that the transmission dynamics were far more complex. Within a few days, the virus charged into Ayawaso Central, an extremely high-density, inner-city enclave where suburbs such as Nima, Maamobi and Kanda huddle tightly together. Then Accra Central (Jamestown, the High Street, the Central Business District, etc.) took its turn, before the most fascinating development of all: the emergence of Korle Klottey (Osu, Ridge, North Adabraka, Odorna, etc.) as the fastest-evolving hotspot of all. Ethnographic economic data shows trends tied to the commuting habits of specific pools of informal labour who reside in one enclave and ply particular trades in others. Rudimentary covariance analysis of the interrelationships among the trends in these shifting hotspots would have highlighted the spatio-economic links driving the spread of the virus from one high-density spot to another. But serious urbanography, beyond pure epidemiology, would be required to deepen the insight, once again making a strong case both for wider and prompter sharing of complete data and for maintaining a multidisciplinary stance.
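A rudimentary version of that covariance analysis needs nothing more than daily case counts per enclave. The series below are invented purely to show the mechanics, not real GHS data:

```python
import numpy as np

# Invented daily new-case counts for three enclaves (illustration only).
ayawaso_central = [2, 3, 5, 8, 12, 15, 18]
accra_central   = [1, 2, 4, 7, 11, 14, 17]
korle_klottey   = [0, 0, 1, 3, 6, 10, 16]   # later but steeper take-off

labels = ["Ayawaso Central", "Accra Central", "Korle Klottey"]
series = np.array([ayawaso_central, accra_central, korle_klottey])

# Pairwise correlation of the trends: enclaves whose curves move together
# are candidates for shared commuting or trading links worth a closer,
# urbanographic look.
correlations = np.corrcoef(series)
for label, row in zip(labels, correlations):
    print(label, [f"{value:.2f}" for value in row])
```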
The social relief strategy advised by such a multidisciplinary stance would not have prioritized warm rations over broader, invasive but respectful, public health interventions beyond disease surveillance alone. Communal toilets, communal baths, communal pipe stands and similar locations would have seen re-engineering to enable a modicum of social distancing, however difficult that prospect might be in places like Old Fadama, where human density at certain seasonal peaks exceeds 3,000 per hectare. The distribution of hygiene products would have received the same attention as the sharing of warm rations. Knowing that Ayawaso East, Ayawaso West and Korle Klottey are the areas where Covid-19-related hospitalizations are likely to increase, given the growing trend in routine surveillance results, mobile screening facilities should have been deployed there more aggressively in a kind of shifting sentinel strategy.
Accra Central and Ayawaso Central, having become a source of worry because of the fast-growing number of asymptomatic individuals observed during contact tracing, should have been earmarked for limited serological surveys as part of the post-lockdown measures. In short, the main point here is not to quibble over the President’s decision to lift the lockdown per se. The focus is on highlighting what a truly data-driven set of measures would have looked like.

And the other controversy, over testing numbers? Here too, much of the hoopla could have been avoided had politicians not attempted to take credit for Ghana’s supposedly high ranking on testing league tables in Africa. The impression was created that the country’s performance was the result of massive investments in RT-PCR platforms, reagents, and diagnostic assays. People with connections in the scientific community knew, however, that these resources were yet to be made available in any quantity capable of effecting such a dramatic transformation of the country’s testing capacity. Judging from the experience in other countries, the recently donated test kits from the Jack Ma Foundation were yet to be put to use because compatible reagents were not available.
No wonder, then, that these PR narratives triggered protests, most famously from the former Director-General of the Ghana Health Service itself. It turns out that the clever scientists at Noguchi had found a workaround: pooled sampling. It is true that the original algorithms and supporting logic for maintaining experimental validity when running tests in aggregates date all the way back to the work of the political economist Robert Dorfman, notably his 1943 paper in the Annals of Mathematical Statistics (another toast to multidisciplinary thinking). But the sheer boldness of responding in this manner to the national call for a surge in testing, even without the corresponding resources, deserves respect. Contingent ingenuity in the laboratory does not, however, excuse incoherence at the level of national policy.
As Noguchi’s leadership freely admits, pooled sampling will only remain effective if infection rates in Ghana stay subdued. It is not for nothing that many reference labs around the world have not got around to accepting pooled sampling for routine Covid-19 case confirmation. Whilst India did so just a little over a week ago, the many caveats added by the country’s apex health authority, the Indian Council of Medical Research (ICMR), show clearly that the very real risk of false negatives due to dilution remains a concern for many laboratory quality assurance and bioethics experts. The ICMR’s decision to impose a cap of five samples per pool, a threshold Ghana initially adopted before “escalating” to 10 samples per well, speaks to the fear of degrading diagnostic sensitivity below acceptable thresholds.
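Dorfman’s own arithmetic shows why pooling only pays when positivity is low. Here is a minimal sketch of the expected workload per sample under two-stage pooling; it assumes independent infections and a perfectly sensitive test, which the dilution worries discussed above make optimistic:

```python
def expected_tests_per_sample(prevalence: float, pool_size: int) -> float:
    """Dorfman two-stage pooling: one test per pool, plus individual
    retests of every member of any pool that comes back positive.
    Assumes independent infections and a perfectly sensitive assay."""
    probability_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + probability_pool_positive

for prevalence in (0.01, 0.02, 0.05, 0.10):
    for pool_size in (5, 10):
        expected = expected_tests_per_sample(prevalence, pool_size)
        saving = 1 - expected
        print(f"prevalence {prevalence:.0%}, pool of {pool_size}: "
              f"{expected:.2f} tests per sample ({saving:.0%} saved)")
```

At 1 to 2% positivity the savings are dramatic; push prevalence towards the roughly 8% seen in Ghana’s surveillance and tracing cohorts and, even before false negatives are considered, the advantage shrinks sharply.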
The ICMR also went further, permitting pooled sampling only where the pre-test probability of positivity is lower than 2%. Stanford’s Benjamin Pinsky, a clinical virologist, recently led a team conducting mass community screening for Covid-19 (especially at the sub-clinical level) in San Francisco, and he pegged the suspected positivity ratio at 1%. These precautions, put in place in other epidemiological contexts, raise important points for Ghana’s continued use of pooled sampling. Firstly, a pooled sampling protocol is highly sensitive to the specifics of the test kit in use, the epidemiological background, and the goals of screening. Consequently, such protocols tend to be submitted to peer review. In Ghana, the protocol has not even been published. Secondly, important ethical issues arise when human-subject testing protocols are changed midstream. Institutional Review Board (IRB) approval is typically required, and in a public health emergency, national-level ethical clearance, as was the case in India, becomes important. Given that members of the medical fraternity in Ghana appeared unaware, I would reckon that this is yet to be done. At any rate, in India the process was openly announced. It is fair to wonder whether this is all mere bureaucracy. Alas, it isn’t. In routine surveillance referrals for testing, clinical outcomes for the individual remain important notwithstanding the prioritization of public health.
Differential diagnosis remains the standard of care in medical situations like Covid-19, where observed symptoms can be highly non-specific and many respiratory pathogens could be implicated in the clinical presentation. (Some health workers have even taken to calling SARS-CoV-2, the microbe that causes Covid-19, a “flu virus”, yet it belongs to a completely different family of viruses.) In that regard, re-sampling for further tests could be warranted even after a negative test result. In a pooled testing scenario, this situation is complicated, especially in the absence of patient consent. Moreover, because best practice also recommends the use of double probes targeting different gene/ORF regions of the viral RNA to increase sensitivity, a nominally negative result can, theoretically, mask discordance between the two probes, which again may require retesting.
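For illustration, this is how a dual-target result might be triaged; the decision labels are my own shorthand, not the reference labs’ published algorithm:

```python
def interpret_dual_target(target_one_detected: bool, target_two_detected: bool) -> str:
    """Illustrative triage for a dual-probe RT-PCR run: concordant results
    are reported as such, while discordance is flagged for retesting of the
    individual sample rather than being reported as a clean negative."""
    if target_one_detected and target_two_detected:
        return "report positive"
    if not target_one_detected and not target_two_detected:
        return "report negative"
    return "inconclusive: retest the individual sample"

print(interpret_dual_target(True, False))   # discordance -> retest
```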
More crucially, in routine surveillance or enhanced contact-tracing situations, molecular tests are contributory but not determinative in every instance. A high index of suspicion due to travel history, clinical observations and other factors might warrant re-confirmation. A protocol that conserves resources (and, to a somewhat lesser extent, time) by eliminating individual retesting of negative samples in all cases risks bumping up against ethical norms. How to resolve the conundrum of reconciling the national demand for surged testing with the rights of the individual patient? Simple: segment the testing pool into its original cohorts, as sketched below. Mass community screening, which makes up the bulk of current testing, has a limited connection to clinical management and should probably not generate any serious ethical issues in a pooled sampling regime. Routine surveillance and enhanced contact-tracing cases, on the other hand, present a clear challenge and are best not confirmed through pooled sampling, especially considering that their pre-test probability of positivity, going by Ghana’s Covid-19 testing data, is well above what stringent review boards elsewhere have deemed acceptable.
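The segmentation proposed above could be as simple as routing samples by cohort before they reach the pooling bench. A minimal sketch, with cohort labels of my own choosing:

```python
# Illustrative routing rule for the cohort segmentation proposed above:
# only mass community-screening samples go into pools; surveillance and
# contact-tracing samples are tested (and retested) individually.
POOLABLE_COHORTS = {"community screening"}

def testing_route(cohort: str) -> str:
    return "pooled testing" if cohort in POOLABLE_COHORTS else "individual RT-PCR"

for cohort in ("community screening", "routine surveillance", "enhanced contact tracing"):
    print(f"{cohort} -> {testing_route(cohort)}")
```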
To set minds at ease, the reference labs may consider independent IRB evaluation of the claim that, even for samples collected from asymptomatic individuals, presumably with very low viral loads, cDNA concentration techniques and additional amplification time are not required during thermocycling to keep the probability of false negatives within acceptable limits. I acknowledge the steady stream of papers in the protocol investigation literature providing reassurance about the merits of pooled testing, but such testimonials are precisely the kind of evidence that ethics committees are best qualified to weigh. The last strand of this particular controversy concerns the possibility of “multiple counting” of the same individual as a result of multiple tests per individual. A claim has been made that compartmentalizing the results of recovering patients, who are tested a total of three times before a case is resolved, addresses all the issues. Unfortunately, it does not. The different testing labs at present maintain separate indexes and case investigation form coding procedures.
There is currently no efficient way to harmonize and consolidate multiple case investigations of the same individual, especially as the case investigation forms map to a unique sample ID but not to any patient ID at all. Multiple samples submitted to different labs would automatically count as separate cases. Multiple samples submitted to the same lab can only be harmonized against a single patient if the records can be traced back to that individual. In a pooled sampling regime, it is problematic to trace the samples constituting every pool back to individual histories without defeating the original goal of saving time.
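At bottom, the counting problem described above is a missing-join-key problem. A small sketch, with entirely hypothetical field names and identifiers rather than the actual case investigation form schema:

```python
# Hypothetical case investigation records keyed by sample ID only.
records = [
    {"sample_id": "LAB-A-0001", "lab": "Lab A", "patient_id": None},
    {"sample_id": "LAB-B-0147", "lab": "Lab B", "patient_id": None},
    {"sample_id": "LAB-A-0093", "lab": "Lab A", "patient_id": None},
]

def count_cases(records):
    # Without a patient-level identifier, every sample ID is a separate "case".
    return len({record["patient_id"] or record["sample_id"] for record in records})

print(count_cases(records))  # 3 cases, even if all three samples came from one person

# A shared patient identifier collapses the duplicates into a single case.
for record in records:
    record["patient_id"] = "PATIENT-000042"
print(count_cases(records))  # 1
```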
So, here too, it is best for the limitations to be openly acknowledged and solutions widely canvassed through open and sincere national conversations.

On the whole, Ghanaians seem quite impressed by the enthusiasm with which members of the country’s long-neglected scientific community have embraced their role in responding to the public health emergency. The constraints imposed by politicians’ lack of “data candour” are, however, slowly undermining the confidence some sections of society have in the respectful coexistence of science and politics that marked the early days of the government’s response.