Dear colleagues, My name is Jim Thomas. I am a researcher and consultant for civil society organisations tracking the implications of new and emerging technologies, and a current member of the mAHTEG on Synthetic Biology. Thank you for the opportunity to make this submission in this open forum. I apologise in advance that it is a long submission, but I believe this topic is _incredibly_ urgent and that now is the right moment for the Parties to the CBD to proactively initiate the work necessary to understand, assess and engage with the implications of how Artificial Intelligence and Machine Learning are integrating with Synthetic Biology. The current speed, investment and shifts in technical and corporate developments in this arena should have us all sitting up in our seats and giving this topic focused attention.
I will start with a quote:
“Limits are now being breached. We are approaching an inflection point with the arrival of these higher order technologies, the most profound in history. The coming wave of technology is built primarily on two general-purpose technologies capable of operating at the grandest and most granular levels alike: artificial intelligence and synthetic biology” – Mustafa Suleyman, co-founder of DeepMind (now Google) and Inflection AI, The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma, p. 55
-----
‘Synthetic biology’ as a field was founded two decades ago on the use of large sets of genomic data (big data) and new computational tools for the rational design of biological parts, organisms and systems. As the underlying digital tools for data processing and design are transformed, so the abilities, scope and potential impact of synthetic biology as a field and industry (including its potential impact on the natural world and the aims of the Convention) are also now rapidly transforming in front of our eyes.
Generative AI as a Syn Bio gamechanger:
Regarding the integration of algorithmic processes and AI: some synthetic biology firms and practitioners (eg Amyris Biotechnologies, Zymergen, Ginkgo Bioworks) have used forms of artificial intelligence, including neural nets, for over a decade to sort through genomic data and select viable sequences as part of their genome-design processes, as well as to amass large genomic libraries. However, in the past year the release of massive transformer-based or diffusion-based AI ‘foundation models’ (such as OpenAI’s ChatGPT, Google’s Bard, Meta’s Llama or Stability AI’s Stable Diffusion) has effected a historic switch in the AI space that is now also creating a parallel switch in the Syn Bio space. This is a switch that Nvidia (the leading producer of AI chips) describes as a move from “discriminative AI to generative AI”, and in biotech it echoes the field’s own earlier switch from genomics to a more mature synthetic biology.
Whereas the ‘discriminative AI’ of the past decade was about sorting differences and identifying clusters and outliers (ie high-throughput automated analysis), generative AI is about directly generating novel forms (ie high-throughput automated de novo design). Today’s generative models such as ChatGPT or DALL-E respond to human natural-language prompts by automatically stitching together complex and compelling texts, images or videos that appear (and essentially are) entirely new, and then outputting them into the world.
They do this by running massive computational machine learning analysis (training) over the relationships between elements within billions of ingested texts and images, and then fine-tuning the resulting model on more specialized datasets to produce probabilistic algorithms. The generative system then produces outputs coherent with the algorithmic rules the trained model has deduced. This creates new variations of text or image that satisfy the user’s request for a novel synthetic media product with certain parameters – at lightning speed, for hundreds of millions of users.
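For readers less familiar with how such systems produce ‘new’ material, the short Python sketch below is a deliberately simplified stand-in for this “learn statistical relationships, then sample novel outputs” pattern. Real foundation models use transformer neural networks at vastly greater scale; everything here (the toy corpus and function names) is illustrative only.

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(corpus_tokens):
    """Count which token tends to follow which in the training corpus."""
    counts = defaultdict(Counter)
    for current, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[current][nxt] += 1
    # Convert raw counts into probabilities for each context token.
    return {tok: [(nxt, c / sum(followers.values()))
                  for nxt, c in followers.items()]
            for tok, followers in counts.items()}

def generate(model, start_token, length=20):
    """Sample a 'novel' sequence from the learned probabilities."""
    sequence = [start_token]
    for _ in range(length):
        options = model.get(sequence[-1])
        if not options:
            break
        tokens, probs = zip(*options)
        sequence.append(random.choices(tokens, weights=probs)[0])
    return sequence

# Toy corpus: a real foundation model trains on billions of documents
# (or, in the genomic case, millions of DNA/RNA sequences).
corpus = "the cat sat on the mat and the cat ate the rat".split()
model = train_bigram_model(corpus)
print(" ".join(generate(model, "the")))
```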
Generative AI tools such as ChatGPT (which went into public use almost exactly a year ago) represent the fastest ever commercial take-up of a new technology in history and, as such, are attracting astronomical sums of speculative investment. The high expectation of large and near-term commercial returns is driving the direction, nature and speed of real-world development. These tools and their results, arriving so fast into commercial use, are stretching and breaking our governance mechanisms across many domains in highly challenging ways.
The next Big Tech investment? – a rush to AI/Syn Bio:
Having mastered AI text, image, sound and video generation, the leading AI firms (which are also the world’s largest companies by market capitalisation) are now turning their attention to other commercially valuable forms of ‘language’ on which they can deploy their large language models (and similar architectures). In particular, there is incredible excitement across the tech industry to make the biological ‘languages’ of genomics and proteomics (and by extension the field of synthetic biology) the next multi-trillion-dollar commercial frontier for the generative AI revolution to ‘disrupt’ and deliver investor payouts – having already spectacularly ‘disrupted’ graphic design, script writing, advertising, movies, the legal profession, journalism and more. The senior director of AI research at Nvidia, Anima Anandkumar, gives an example of how AI corporate leaders are making this switch to applying their generative AI capabilities to synthetic biology when she describes Nvidia’s GenSLM large language model:
“Rather than generating English language or any other natural language why not think about the language of the genomes. You know, we took all the DNA data that is available - both DNA and RNA data, for virus and bacteria that are known to us -about a hundred and ten million such genomes. We learnt a language model over that and then we can now ask it to generate new genomes” (source: Nvidia - The AI Podcast - Anima Anandkumar on Using Generative AI to Tackle Global Challenges – Ep203 -
https://soundcloud.com/theaipodcast/anima-anandkumar)
As a demonstration of GenSLM, Nvidia recently ‘fine-tuned’ this genomic language model (trained on 110 million genomes) on a further 1.5 million COVID viral sequences in order to generate novel coronavirus variants, to aid pandemic prediction and vaccine design (see
https://blogs.nvidia.com/blog/generative-ai-covid-genome-sequences/ ) – although this also raises dual-use concerns. Amongst the synthetic variants of COVID that Nvidia created digitally by AI are strains that closely match actual strains that have emerged in nature since the training set was compiled. This work is not just about COVID: Nvidia emphasizes that the underlying genomic large language model (GenSLM) could now be fine-tuned to create de novo genomes of other viruses or bacteria, enabling new syn bio microbes to be generated automatically. In effect, Nvidia has created a ‘ChatGPT for microbial design’. It should be noted that Nvidia – a chipmaker and the world’s sixth-largest corporation by market capitalisation – was not previously seen as a life sciences company.
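To make the “pretrain, fine-tune, then generate” workflow concrete, here is a minimal Python sketch using the open-source Hugging Face transformers and datasets libraries. The checkpoint name ("example-org/genomic-lm") and the data file are hypothetical placeholders; this is not Nvidia’s GenSLM code, only the general shape of such a pipeline.

```python
# Sketch only: the checkpoint "example-org/genomic-lm" and the file
# "sars_cov_2_genomes.txt" are hypothetical placeholders, not GenSLM assets.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# 1. Load a genomic "foundation model" pretrained on a broad corpus of
#    microbial DNA/RNA sequences (hypothetical checkpoint).
tokenizer = AutoTokenizer.from_pretrained("example-org/genomic-lm")
model = AutoModelForCausalLM.from_pretrained("example-org/genomic-lm")

# 2. Fine-tune it on a narrower corpus, e.g. coronavirus genomes stored
#    one sequence per line.
raw = load_dataset("text", data_files={"train": "sars_cov_2_genomes.txt"})
tokenized = raw["train"].map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="genomic-lm-covid",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# 3. Ask the fine-tuned model to continue a short seed sequence, sampling
#    several candidate "novel variant" genomes.
seed = tokenizer("ATGTTTGTTTTTCTTGTT", return_tensors="pt")
samples = model.generate(**seed, do_sample=True, max_length=512,
                         num_return_sequences=5)
for s in samples:
    print(tokenizer.decode(s, skip_special_tokens=True))
```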
Nvidia is far from the only trillion-dollar tech giant moving at full speed into applying generative AI to Syn Bio entities and parts. Meta (Facebook), Microsoft (the largest investor in OpenAI), Alphabet (Google) and Stability AI are all investing heavily in developing generative AI tools for synthetic biology; the first three of these are also among the seven richest corporations in the world. The established corporate giants of the biotech world (eg Bayer, Syngenta, Corteva) are also using this generative AI approach or contracting with smaller firms that employ it on their behalf. One recent report by a UK AI think tank, the Ada Lovelace Institute, suggests that the market for AI-driven genomics technologies could reach more than £19.5 billion by 2030, up from just half a billion in 2021 – see
https://www.adalovelaceinstitute.org/report/dna-ai-genomics/
While the visceral impact of generative AI is already being felt across many other economic sectors (eg entertainment, law, education, advertising), biotech and AI leaders alike are touting that applying generative AI to synthetic biology in particular will be a much more explosive act of “disruption” than in other fields so far – predicting that it may precipitate a ‘splitting the atom’ moment for AI. In his recent bestseller ‘The Coming Wave’, DeepMind (now Google) co-founder Mustafa Suleyman names the current coming together of generative AI with Syn Bio as the most significant “superwave” that technologists have ever seen. Jason Kelly, CEO of Syn Bio ‘unicorn’ Ginkgo Bioworks, who recently inked a five-year partnership with Google to train large language models for synthetic biology, describes the exceptional commercial opportunity of applying generative AI to Syn Bio like this:
“Here’s why Bio is particularly interesting for folks who are interested in AI: the idea of a foundation model plus fine tuning with specialized data - people in AI understand that. Lets try that with one of the categories of English - lets say ‘legal’ . That thing has to compete with a lawyer at Robeson Grey trained for 15 years, taught by other humans, writing contracts designed to be understood by human brains… in English (a language that coevolved with our brains). That also gives us leverage from how our brains work - and so we are asking these computer brains - neural nets - to compete with us on our turf -it’s a pretty high bar that its got to compete with. Now lets go over into biology. I remind you it runs on code (sequential letters) feels a lot like language - but it aint our language, we did not invent it ,we do not speak it. we do not read it, or write it and so I feel like these computer brains are going to kick our ass a lot faster in this domain than they do in English ... if you are looking to understand where Ai is really going to flip the script - not be a low level Clay Christensen disruption which is what’s happening in English - but rather be a like splitting the atom: its bio.”
(Jason Kelly speaking on ‘No Priors’ Podcast ep34 – see 12:50 at
https://www.youtube.com/watch?v=snt-fMsCDVI)
Synthetic protein engineering:
Ginkgo’s AI collaboration with Google is initially focusing on using generative AI to design proteins, drawing on Ginkgo’s in-house codebase of 2 billion protein sequences. Kelly explains that the new idea is: “Can I make a foundation model that … speaks ‘protein’ just like GPT4 speaks english?”. Indeed, following the success of DeepMind’s AlphaFold programme in ‘solving’ protein folding (a problem previously thought intractable), several of the first wave of generative AI models in synthetic biology are focused exactly on generating de novo proteins never before seen in nature (“generative protein design”) as well as altering and ‘optimising’ existing natural proteins.
Some of these generative AI tools for protein engineering have already been highlighted in the note for this online open forum – eg ProtGPT2, ProteinDT and Chroma – but there are also a number of startups (beyond Ginkgo) focused entirely on using generative AI to create a range of de novo proteins for commercial markets, including enzymes, catalysts, food ingredients, pharmaceuticals, biomaterials, coatings, gene therapy and more. In another example of how AI is bringing unusual tech entrants into Syn Bio, the global cloud data company Salesforce has developed ProGen, yet another large language model for generating novel proteins. This model was trained by feeding the amino acid sequences of 280 million different proteins into a machine learning model. Salesforce then fine-tuned the model by priming it with 56,000 sequences from just one class of protein – lysozymes – in order to initially generate functional novel lysozymes (used as food ingredients). A report on this work in Science Daily emphasises just how huge the protein design space for novel variation is, even within this one class of proteins:
“With proteins, the design choices were almost limitless. Lysozymes are small as proteins go, with up to about 300 amino acids. But with 20 possible amino acids, there are an enormous number (20 to the power of 300) possible combinations. That's greater than taking all the humans who lived throughout time, multiplied by the number of grains of sand on Earth, multiplied by the number of atoms in the universe. Given the limitless possibilities, it's remarkable that the model can so easily generate working enzymes.” - see
https://www.sciencedaily.com/releases/2023/01/230126124330.htm
Proteins as food ingredients are just one slice of the future engineered-protein market that companies like Salesforce or Ginkgo are chasing. Syn Bio companies are developing engineered proteins as coatings, sweeteners, pesticides, packaging and more – including several uses that will involve environmental release of these novel protein entities.
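To give a sense of scale for the “20 to the power of 300” figure quoted above from Science Daily, here is a short back-of-the-envelope check in Python. The comparison figures for humans, sand grains and atoms are rough, commonly cited order-of-magnitude estimates that I am assuming here; they are not taken from the article.

```python
# Rough order-of-magnitude check of the protein design space quoted above.
sequences = 20 ** 300        # a 300-residue protein, 20 possible amino acids per position
humans_ever = 10 ** 11       # roughly 100 billion humans have ever lived (assumption)
sand_grains = 10 ** 19       # rough estimate of grains of sand on Earth (assumption)
atoms_universe = 10 ** 80    # rough estimate of atoms in the observable universe (assumption)

comparison = humans_ever * sand_grains * atoms_universe    # about 10^110
print(f"20^300 is roughly 10^{len(str(sequences)) - 1}")   # about 10^390
print(sequences > comparison)                              # True: the quoted claim holds
```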
Planetary boundaries for novel entities:
By itself, this new AI-powered ability to generate a wider range of de novo proteins, ever faster, for industrial use should be regarded as a significant shift in industrial production patterns for the CBD to consider in relation to its mandate – with potentially huge impacts on biodiversity in the longer term once a greater variety of engineered proteins makes it into the biosphere. This will place high demands on monitoring, assessment and reporting, as well as the need to develop systems of recall, clean-up and liability. A historical point of comparison might be the advent of synthetic chemistry techniques and the establishment of the accompanying petrochemical-based synthetic chemical industry in the late 19th and early 20th centuries, which flowed from new techniques to ‘crack’ hydrocarbons. That ability to generate de novo chemical molecules before proper oversight and regulation were in place led to the creation and dispersal of many thousands of different synthetic chemicals into the biosphere – many of which are now subject to complicated and difficult global efforts at clean-up or mitigation (or attempts at industrial replacement) because of the unexpected biological and health effects of those synthetic compounds interacting with the natural world. It is estimated that there are between 140,000 and 350,000 different types of manufactured chemicals being released into the biosphere at approximately 220 billion tonnes per year, and that the USA alone adds approximately 15,000 new synthetic chemicals to the inventory every year. Most of these are new to nature and many are toxic at some concentration. (See this Naidu et al review article:
https://www.sciencedirect.com/science/article/pii/S0160412021002415) In early 2022 scientists reported that humans had breached the safe ‘planetary boundary’ for novel chemical entities in the biosphere.
The prospect of unleashing a new generative synthetic protein industry – undergirded by massive speculative capital and intended to artificially generate an array of de novo proteins for market profit ahead of deliberate international discussion and rule-setting – should by itself raise significant red flags. That it is supercharged by the current investment hype around AI is doubly worrying. Proteins have been described as intricate nanomachines whose ongoing and multiple interactions govern most life processes at the molecular level. Synthetic proteins, as a class of complex molecules, are therefore likely to be more biologically active (and disruptive) than simple synthetic chemical compounds – indeed they may be deliberately designed to speed up, slow down, transform or otherwise alter the molecular biological processes at the basis of life for industrial purposes, requiring more complex safety assessment. Observers have noted, for example, that synthetically engineered proteins appear to have different stability from naturally evolved proteins – which may invite comparisons with the persistence problems of certain classes of synthetic chemicals (eg POPs).
It was out of the enormous challenge of trying to deal with the negative effects of unassessed, poorly understood synthetic chemicals that the precautionary principle was established in environmental governance; it is enshrined in the preamble to the CBD itself as well as in the objective (Article 1) of the Cartagena Protocol on Biosafety. This time we have the chance to apply it before the number of novel protein entities entering the biosphere starts to mimic the trajectory of synthetic chemicals.
“Text-to-protein” may mean greater distribution and dual use:
Even more concerning is that, under current commercial directions, the industrial generation of de novo proteins through AI-driven syn bio may become a widely distributed, automated and difficult-to-manage production industry far faster than the capital-intensive industrial chemistry industry did – as a result of new AI protein engineering tools. Just as ChatGPT or DALL-E almost overnight enabled millions of ordinary users with just a web browser to enter natural-language text descriptions in order to generate synthetic media, so new foundation models are being developed for web-based, natural-language “text-to-protein” design. In a system like ProteinDT a user can write in natural language (such as English) the characteristics they want to see in a synthetic protein (eg high thermal stability) and the generative AI model will then generate multiple viable synthetic protein sequences that can be selected and created from synthetic RNA strands (eg expressed by an engineered microbe or in a cell-free system) – equipment that is itself becoming more distributed.
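Purely to illustrate the shape of that interaction (natural-language prompt in, candidate amino-acid sequences out, wet-lab synthesis afterwards), here is a minimal Python mock-up. The class and its behaviour are hypothetical stand-ins of my own; this is not ProteinDT’s actual interface, and a real model would condition its outputs on the prompt rather than sampling at random.

```python
# Hypothetical mock-up only: illustrates the prompt-in / sequences-out shape
# of a "text-to-protein" tool, not ProteinDT's real interface or behaviour.
from dataclasses import dataclass
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acid letters


@dataclass
class TextToProteinModel:
    """Stand-in for a text-conditioned generative protein model."""
    seed: int = 0

    def generate(self, prompt: str, num_candidates: int = 5, length: int = 120):
        # A real model would use learned text and protein embeddings; here we
        # return random sequences simply to show the output format.
        rng = random.Random(hash(prompt) + self.seed)
        return ["".join(rng.choice(AMINO_ACIDS) for _ in range(length))
                for _ in range(num_candidates)]


model = TextToProteinModel()
candidates = model.generate("a protein with high thermal stability")
for seq in candidates:
    print(seq)  # candidates would then go to wet-lab synthesis and testing
```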
This distributed “text-to-protein” model could make oversight more difficult. For example, one paper on text-to-protein generation acknowledges: “Although text-based protein design has many potential positive applications in agriculture, bioengineering, and therapeutics, it can be considered a dual-use technology. Much like generative models for small molecules (Urbina et al., 2022), ProteinDT could be applied to generate toxic or otherwise harmful protein sequences. Even though acting on these designs would require a wet lab, synthesizing custom amino acid sequences is typically straightforward compared to synthesizing novel small molecules.” – see
https://arxiv.org/pdf/2302.04611.pdf The paper further notes that the authors’ own model allows generation of venomous and dangerous proteins and that “Future efforts to expand the training dataset and modeling improvements could increase the dual-use risk.”
Synthetic biology firms such as Ginkgo argue that in this way it will be possible to replace existing petrochemical-based production in large, capital-intensive facilities with faster and lighter biological production methods. While replacing petroleum-derived chemicals may be one outcome, it will be only one amongst many commercial drivers of the technology. Other commercial applications may replace natural products grown by small farmers or displace forest- or marine-derived commodities – changing land and ocean use patterns. If synthetically engineered proteins become a rapidly expanding, structurally diverse and widely distributed class of novel synthetic entities entering the biosphere, this new engineered-protein industry will necessitate new forms of biosafety assessment and oversight, and it may further worsen the overshoot of the planetary boundary on novel entities. In view of this, the precautionary approach should be followed to put a halt to these developments pending a more serious review by Parties. This is called for under Article 7(c) of the Convention and Article 1 of the Cartagena Protocol, since this novel threat to biodiversity follows (in part) from the use of living modified organisms.
Further areas of consideration arising from AI/Synthetic Biology integration:
• Bio-computation/biointelligence: Beyond accelerating synthetic biology, efforts are focusing on applying Syn Bio to transform AI and computation itself – toward biology-based computing, DNA data storage and molecular circuits. This use of living engineered elements to enable computation is sometimes called ‘biointelligence’. Using biological elements as a substrate for computing is posited as a potential solution to resource limitations in silicon-based computation. Engineered organisms, eg E. coli, demonstrate remarkable computing capacity distributed across a vast number of cells. Synthetic biologists in Kolkata, India have designed E. coli bacteria to computationally solve maze problems (see
https://thenewstack.io/distributed-e-coli-biocomputer-solves-maze-problems/ ). Australian start-up Cortical Labs has even grown a sort of brain-in-a-vat (an in-vitro neural cell culture) as a ‘biological processing unit’ and taught it to play the computer game Pong – see
https://www.nature.com/articles/d41586-022-03229-y. While we are some years away from a synthetic biology-based form of AI being in common use, it is quite feasible to see a biological turn in computing in the coming decade. It would be helpful for the CBD to consider how this new use of genetic resources and genetic processes challenges the mandate of the Convention.
• Sensing and signalling for AI agriculture: Organisms are being engineered as sensing/signalling components in cyberphysical systems that mix AI with Syn Bio. Specifically, companies are now developing engineered crops that respond to stressors like drought or pests by emitting fluorescence that can be sensed by AI-guided precision agriculture systems. For example, InnerPlant is one synthetic biology company that genetically engineers plants to emit signals when stressed; these can then be monitored by satellite and by cameras on John Deere's "see and spray" systems – see
https://www.dtnpf.com/agriculture/web/ag/blogs/machinerylink/blog-post/2023/01/17/deere-works-use-camera-technologies
This amounts to integration with artificial intelligence systems in a different way. Just as seed and chemical companies previously engineered crops to be ‘Roundup-ready’ to fit their proprietary chemicals, the same firms now want crops to be ‘robot-ready’ to fit their proprietary precision agriculture systems. In the case of Roundup-ready crops this led, not surprisingly, to those chemical companies establishing overbearing market dominance through technical lock-in; the same may occur through ‘robot-ready’ lock-ins too.
Impacts on Conservation, Sustainable Use, and Equitable Sharing:
Conservation: AI's application in mass-designing proteins, genetic parts, and organisms accelerates industrialization in Synthetic Biology across various sectors. This rapid development raises concerns about biosafety, governance, and the entry of diverse synthetic organisms into markets and the environment. Text-to-DNA and text-to-protein applications lower barriers for amateurs and small players, posing challenges in detection, monitoring, governance, and containment.
Sustainable Use: AI-driven text-to-protein generation may disrupt industries like flavours, fragrances and cosmetics. Sustainable use of biodiversity, largely by small-scale farmers in the global South, is threatened as AI offers biosynthetic alternatives for key compounds. For example, Arzeda is a leading engineered-protein company (they call themselves “The Protein Design Company”) using generative AI tools to make commercial proteins. Their first product is a de novo engineered enzyme that upgrades common but bitter components in stevia (eg Reb A) into rarer, sweeter components (eg Reb D and Reb M). This significantly changes the economics of stevia growing – giving incredible market power to the holder or licensee of the patent on the enzyme. If enzymes are used to upgrade low-quality biomass into high-quality flavours, fragrances and sweeteners, that too will disrupt sustainable use and natural markets. For example, the Israeli startup Ambrosia Bio is working with Ginkgo Bioworks’ AI platform to develop engineered enzymes that convert cheap industrial feedstocks (eg sugar and starch) into rare sugars and specialized ingredients such as allulose (see
https://www.ginkgobioworks.com/2023/06/29/developing-a-more-scalable-enzymatic-process-for-allulose-with-ambrosia-bio/). Allulose is found naturally only in small quantities in plant foods such as brown sugar, maple syrup, wheat and dried fruits like figs and raisins. Being able to produce it enzymatically from cheap sugars and starches may, for example, take away market opportunities from maple syrup or fig producers.
Equitable Sharing of Benefits: AI's extensive use of biological data complicates governance and the fair sharing of benefits from digital sequence information (DSI). Generative AI systems rely on remixing collected DSI without proper attribution or benefit sharing, potentially leading to digital biopiracy. See more below under ‘Theft and Biopiracy’ in the lessons-to-learn section.
Regarding what lessons can be learned from other domains addressing artificial intelligence:
Fields of critical scholarship on AI (including AI ethics and AI safety) are foregrounding some of the following questions about generative (and other) AI systems that are also directly relevant to addressing AI/Syn Bio integration:
1. Bias and other problems in training data.
There is an extensive literature, and acknowledgement of concern among AI developers, that the design, nature and selection of training datasets for foundation models (and for fine-tuning) can introduce highly problematic biases in outcomes. High-profile examples of AI’s bias problem include racial biases, where AI programs trained on public data scraped from the internet were either unable to recognize darker skin tones or labelled and presented Black and brown people with racial slurs and demeaning associations. AI systems used for recruitment were found to be strongly biased against women and ethnic minorities, as were AI systems used in legal sentencing. Scholars have suggested that AI systems for biodiversity data may also introduce problematic biases – for example if the training set is sourced from databases that are skewed towards urban or northern settings, where there is a greater amount of data contributed by industry, or where indigenous or locally gathered data is absent. Certain types of knowledge count as ‘data’ while other types of knowledge may not be encoded, or may be ‘cleaned’ away (all training data undergoes layers of cleaning).
In the case of datasets for genomic design it will be necessary to reflect carefully on what biases, absences and other warping effects the underlying training datasets may create. For example, many crop genomic datasets are compiled by commercial and other research institutions that have focused for several decades on breeding very particular traits for industrial monocrop agriculture systems. These datasets may claim to represent genomic diversity within a crop (eg maize or beans) but may actually massively overrepresent genomic sequences from a narrow band of varieties optimized for commercial, northern, large-scale agriculture systems. Microbial or viral genomic collections may likewise have been collected and digitized for very specific purposes or under specific conditions that make them non-representative or build bias into the training data.
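As a simple illustration of how such skew can at least be surfaced, the short Python sketch below counts how many sequences in a (hypothetical) crop genomic training set come from each variety and region of collection; a handful of commercial varieties dominating the counts would be exactly the kind of overrepresentation described above. The file name and column names are assumptions for illustration only.

```python
# Illustrative sketch: "maize_sequences.csv" and its 'variety'/'origin'
# columns are hypothetical; the point is simply to audit dataset composition.
import csv
from collections import Counter

variety_counts = Counter()
origin_counts = Counter()

with open("maize_sequences.csv", newline="") as f:
    for row in csv.DictReader(f):
        variety_counts[row["variety"]] += 1
        origin_counts[row["origin"]] += 1

total = sum(variety_counts.values())

print("Share of training sequences by variety:")
for variety, n in variety_counts.most_common(10):
    print(f"  {variety}: {n / total:.1%}")

print("Share of training sequences by region of collection:")
for origin, n in origin_counts.most_common(10):
    print(f"  {origin}: {n / total:.1%}")
```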
2. Black box effects and hallucinations:
One of the hardest phenomena to deal with in AI systems based on machine learning is the problem of the ‘black box’. In its simplest form this refers to the fact that the machine learning is done by the system itself, without supervision, and so the ‘lessons’ and associations that it formulates into algorithms are not known to the humans operating the system (and may not be knowable). That is: the logic by which the AI makes its decisions is an unexplainable ‘black box’. This leads to unexpected and unexplainable behaviours and outputs, which may be the result of the system mis-associating or failing to make key associations that would be obvious to a human. One of the most deadly examples of black-box problems playing out has been when AI-guided ‘self-driving’ automobiles fail to recognize pedestrians, cyclists, children and other elements in their vicinity and drive into or over them.
Related to this is the phenomenon of AI ‘hallucinations’ by generative AI systems. Large language models such as ChatGPT routinely incorporate elements in their output that appear compelling but are factually inaccurate or bizarre: living people are described as deceased, dates are given wrongly, generative AI images of people develop additional body parts, mangled unreadable signage is created, and so on. While such hallucinations and black-box failures are problematic enough in the two-dimensional, electronic domains of text, image, video or sound, they could be far more problematic if incorporated into the genomic design of four-dimensional living organisms or of active biological proteins released into the body or the biosphere. Genetic engineers already face problems of unexpected and emergent effects from small genomic changes (even as small as a single base pair). If an AI-designed genome were to begin to behave unpredictably or have significant side effects, it may be impossible to understand why those changes have happened, or to locate the cause, until long after the organism or protein had entered the biosphere. It is not clear how biosafety assessment of AI-designed organisms or proteins can proceed when the system is not even able to explain its own design rationale. In response to the wicked problems of the AI black box, the European Union is now prioritizing the development of ‘explainable AI’. The CBD may wish to insist that any organism, protein or other biological component designed through generative AI be accompanied by a strong explanation of the design decisions behind it.
3. Theft and Biopiracy:
While AI-generated art, video, music and other synthetic media are presented as de novo pieces of work, they are in fact only new assemblages of existing material arranged according to probability by an algorithm. They depend entirely upon an underlying corpus of work created by human beings and turned into ‘tokens’ by the foundation model. In the domain of generative cultural media, the recent frontline of public policy and legal battles has been artists, musicians and other cultural workers bringing AI companies to court for theft and piracy of their creative labour. Visual artists are sometimes able to recognize parts of their own work in outputs mixed together by generative platforms such as DALL-E, Midjourney or ChatGPT, because their work was part of the original training dataset. There are now services such as haveibeentrained.com where artists can upload their own images to discover whether their material has been incorporated without their permission into AI training datasets.
Just as theft and unauthorized use of cultural materials has emerged as a major legal and political fight in AI, so it is likely that the use of digital genomic sequences and other sorts of digital biological sequences in AI training datasets will further deepen and intensify the conflicts over biopiracy and digital sequence information (DSI). When Nvidia boasts that its GenSLM model was trained on “all the DNA data that is available - both DNA and RNA data - for virus and bacteria”, that comment indicates that it has likely not received prior informed consent to use all of that collected digital sequence information to train its generative AI model, since the use of derivative DSI for generative AI platforms is not currently written into material transfer agreements (nor was it when the original bioprospecting activity took place). Nor does there appear to be any mechanism by which the contribution of a particular digital sequence to a synthetic genomic or proteomic sequence can be recognized and fairly recompensed. There is not even a version of haveibeentrained.com for the original providers and stewards of genetic diversity to discover whether their genomic sequences have been incorporated into generative AI syn bio designs – even though the resultant organisms and proteins may be generating actual commercial returns (such as Arzeda’s steviol conversion enzyme or Ambrosia Bio/Ginkgo’s allulose enzyme).
The process within the CBD and under the Nagoya Protocol that is addressing the thorny problems of DSI needs urgently to consider and investigate the further thorny question of how the arrival of generative AI into the synthetic biology space depends upon the mass, unacknowledged theft of genomic sequences and other DSI without even attribution, let alone consent or benefit sharing.