Much has been written about Nassim Taleb's coauthored paper arguing that the precautionary principle dictates that we should avoid GMOs. Given the prominence of the author and his willingness to berate detractors, the paper has received more attention than its ideas merit.
This piece by Stuart Hayashi raises an excellent point. The issue shouldn't be the presence of risk per se but risk on the margin: how much riskier (or less risky) are GMOs compared to other techniques? As Hayashi points out, Taleb's argument is akin to running an experiment without a control. There is an implicit assumption that using GMOs (the experiment) is unambiguously riskier than not using them (the control).
Hayashi's post includes a summary description of Taleb's main argument, which also shows how the same sort of logic can be used to argue that GMOs should be adopted. I've taken Hayashi's description of Taleb's argument and replaced a few of Hayashi's words with my own in brackets:
"The argument is as follows. If we talk about the risk [likelihood] of a GMO doing damage [creating benefits] on any one particular day, it seems that that risk [potential benefit] is minuscule. But what is the statistical risk[likelihood] of a GMO inflicting harm [being created that creates enormous benefits] one day . . . eventually? As time advances, that risk[likelihood] of a GMO eventually causing turmoil [great good] increases exponentially . . . Therefore, the argument concludes that as long as transgenic technology is employed, it is inevitable that one day, something devastating [wonderful] concerning GMOs will occur. Therefore, the one method whereby we can guard [help secure] ourselves against this otherwise-impending harm [benefit] is to avoid [promote] usage of genetic engineering"
Even if Taleb's argument is right, it must also be right for any number of other risks we face: from, say, new diseases that come about from interacting with domesticated or wild animals, from risks of using alternative energy sources (curiously, Taleb says nuclear energy is excluded because "the nature of these risks has been extensively studied"), from risks of potentially electing a warmongering despot, from risks of developing robots and artificial intelligence, from risks of comets passing by the earth, from risks of returning from space travel, from risks of conventional plant breeding, and so on. It is not an argument that is in any way unique to risks from GMOs. Maybe Taleb is right and we are all just sitting around waiting for some worldwide disaster. Even if that's true, I seriously doubt it will be the GMOs that get us.