The Generative AI Revolution: Key Legal Considerations for the Cannabis Industry

For better or worse, generative artificial intelligence (AI) is already transforming the way we live and work. Within two months of its initial release to the public, ChatGPT reached 100 million monthly active users, making it the fastest-growing consumer application in history. Other popular generative AI tools such as Canna-GPT, GitHub Copilot, and DALL-E can generate cannabis guidance, computer code, and images, respectively, with limited human involvement. The implications are immense and have already sparked calls for new federal regulatory agencies, a pause on AI development, and even concerns about extinction.

This alert analyzes how AI is already affecting the cannabis industry, as well as some of the key legal considerations that may shape the future of generative AI tools. You can also watch our latest Fox Forum, in which we talk with Mike Pell, the innovation leader at Microsoft, a principal investor in OpenAI, the company behind ChatGPT.

On April 20, 2023, Canna-GPT was released and marketed as the world’s first AI chatbot focused solely on cannabis-related topics. Canna-GPT is trained to answer consumers’ questions about cannabis products and other topics related to cannabis education. Certain limitations of Canna-GPT are apparent on its face: (1) cannabis products affect different people differently; (2) consumers with pre-existing conditions or complications should consult a medical professional before trying new products recommended by Canna-GPT; and (3) the complex patchwork of state, county, and city cannabis laws makes Canna-GPT’s answers regarding the legality of cannabis products likely unreliable.

Similarly, a cannabis consulting firm released Oddysee, a generative AI platform designed to support cannabis entrepreneurs with education, training, licensing, and ongoing operational support. The consulting firm claims that Oddysee enhances existing intellectual property (IP) that the firm previously used to secure cannabis licenses for its clients nationwide, and that the platform helps entrepreneurs create supporting materials for their cannabis license applications. Oddysee’s main goal is to give aspiring licensed growers, producers, and retailers a deeper understanding of the complex factors involved in navigating the state license application process, potentially at lower cost, along with insights to improve products, increase efficiency, and ultimately drive growth and profitability.

Below, we outline legal issues you should keep in mind while using or reviewing Canna-GPT, Oddysee, or any other cannabis-focused AI platform.

1. Accuracy and Reliability

For all their well-deserved accolades and hype, generative AI tools remain a work in progress. Users, especially commercial enterprises, should never assume that AI-created works are accurate, non-infringing, or fit for commercial use. In fact, there have been numerous recorded instances in which generative AI tools have created works that arguably infringe the copyrights of existing works, make up facts, or cite phantom sources. It is also important to note that works created by generative AI may incorporate or display third-party trademarks or celebrity likenesses, which generally cannot be used for commercial purposes without appropriate rights or permissions. As with any third-party content, companies should carefully vet anything produced by generative AI before using it for commercial purposes.

2. Data Security and Confidentiality

Before utilizing generative AI tools, companies should consider whether the specific tools adhere to internal data security and confidentiality standards. Like any third-party software, the security and data processing practices for these tools vary. Some tools may store and use prompts and other information submitted by users. Other tools offer assurances that prompts and other information will be deleted or anonymized. Enterprise AI solutions, such as Microsoft’s Azure OpenAI Service, can also potentially help reduce privacy and data security risks by offering access to popular tools like ChatGPT, DALL-E, Codex, and more within the data security and confidentiality parameters required by the enterprise.

Before authorizing the use of generative AI tools, organizations and their legal counsel should (1) carefully review the applicable terms of use, (2) inquire about access to tools or features that may offer enhanced privacy, security, or confidentiality, and (3) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements.

For instance, in the medical cannabis industry, the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) created national standards to protect patients’ protected health information (PHI) from disclosure or use without the patient’s consent or knowledge, absent certain exceptions. HIPAA and its corresponding state laws are the first line of defense against threats related to the collection and transmission of sensitive PHI in connection with medicinal cannabis.

Litigation over AI data collection and use has already begun. In one recent class action in the Northern District of California against OpenAI, the creator of ChatGPT, plaintiffs alleged, among other things, violations of users’ privacy rights based on the scraping of social media comments, chat logs, cookies, contact information, login credentials, and financial information. P.M. v. OpenAI LP, No. 3:23-cv-03199 (N.D. Cal. filed June 28, 2023). The ramifications of PHI misuse will only grow as generative AI is integrated into the medicinal cannabis industry.

3. Product Liability

AI-powered products in the cannabis industry, such as automated cultivation systems or quality control solutions, may introduce new quality and/or product liability concerns. Manufacturers and growers should have policies and procedures in place to ensure that AI systems function properly and to identify and mitigate any potential risks associated with the technology. Determining liability in cases where AI systems cause harm or make critical decisions can be complex. As AI becomes more autonomous, existing legal frameworks will need to address questions of responsibility and accountability when AI is involved in accidents or errors.

4. Intellectual Property Protection and Enforcement

Content produced without significant human control and involvement is not protectable under US copyright or patent laws, creating a new orphan class of works with no human author and potentially no usage restrictions. That said, one key principle can go a long way toward mitigating IP risk: generative AI tools should aid human creation, not replace it. When generative AI tools are used merely to assist with drafting or the creative process, the resulting work product is more likely to be protectable under copyright or patent laws. In contrast, asking a generative AI tool to create a finished work product, such as drafting an entire legal brief, will likely deprive the final work product of protection under IP laws, not to mention the professional responsibility and ethical implications.

5. Future Regulation

The cannabis industry is subject to numerous specific state regulations and licensing requirements. Any AI solutions utilized must adhere to relevant state regulatory requirements, including tracking and reporting requirements, product labeling, and adherence to manufacturing and distribution standards. In addition, despite marijuana’s current status as a Schedule I substance under the Controlled Substances Act of 1970, cannabis businesses remain subject to certain federal requirements (e.g., the obligation to pay federal income taxes under the Internal Revenue Code of 1986).

Earlier this year, Italy became the first Western country to ban ChatGPT, but it may not be the last. In the United States, legislators and prominent industry voices have called for proactive federal regulation, including the creation of a new federal agency that would be responsible for evaluating and licensing new AI technology. Others have suggested creating a federal private right of action that would make it easier for consumers to sue AI developers for harms the technology causes. It seems unlikely that US legislators and regulators will overcome partisan divisions and enact a comprehensive framework, but as is becoming increasingly clear, these are unprecedented times. Furthermore, it remains to be seen whether, and to what extent, any future federal regulations regarding generative AI would extend to the cannabis industry, given marijuana’s Schedule I status under federal law.

If you have questions about any of these issues or want to plan ahead, contact one of the authors or a member of our Cannabis or AI, Metaverse & Blockchain industry teams.
