Safeguarding security and privacy in the age of generative AI

Few digital innovations have swept the healthcare landscape with as much force as generative AI, triggered by the release of ChatGPT from OpenAI. Arriving in late 2022 with the intensity of a derecho, generative AI has forced conversations in boardrooms, at professional conferences and across social media.

With deep expertise and a long history in conversational AI, Orbita’s leadership embraces its potential. Generative AI has allowed our teams to automate processes by ingesting content (including text, images, audio, spreadsheets, video and more) from our clients to streamline the delivery of solutions in ways that had previously been impossible. What used to take weeks or months can now be accomplished in hours. Likewise, the power of generative AI supports our teams in automating custom workflows in a fraction of the time it used to take.

Net-net: Solutions like Orbita’s virtual assistants – which help healthcare organizations solve pressing problems such as reducing high call volumes and facilitating stakeholder access to complex resource materials – can be made available almost instantly for immediate ROI.


But as with any significant paradigm shift, the dialog around generative AI has also surfaced very real and grave concerns. These run the gamut from ChatGPT dialogs taking unexpected, hallucinatory turns or spreading misinformation, to compromised privacy in highly regulated industries like healthcare and finance.

Orbita has made a concerted effort to leverage generative AI – but with guardrails to ensure our solutions meet the highest level of accuracy and security.

Proprietary content only. The solutions that Orbita creates that leverage generative AI draw only from trusted sources or our clients’ specific content. When users engage by asking a question, Orbita’s solutions return the answer based on verified, validated materials – either reflecting the exact language of the resource or a summary based on the trusted content. In either case, the response includes a direct reference to the source. Recognizing that third-party content sources may be revised and updated over time, Orbita integrations ensure responses reflect only current versions.
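A minimal sketch of this grounding approach: answers come only from a curated, versioned corpus, every response carries a reference to its source, and the assistant declines to answer rather than generate ungrounded text. The corpus entries, field names and keyword-overlap scoring below are illustrative assumptions, not Orbita's actual implementation.

```python
import re

# Curated, versioned knowledge base: each entry names its source and version
# so responses can cite the exact document they came from.
CORPUS = [
    {"source": "Visiting Hours Policy v3",
     "text": "Visiting hours are 9am to 8pm daily."},
    {"source": "Billing FAQ v7",
     "text": "Statements are issued on the first of each month."},
]

def tokenize(text):
    """Lowercase the text and extract alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def answer(question):
    """Return the best-matching passage and its source, or None if nothing
    in the trusted corpus matches -- never an ungrounded response."""
    q = tokenize(question)
    best, best_score = None, 0
    for entry in CORPUS:
        score = len(q & tokenize(entry["text"]))
        if score > best_score:
            best, best_score = entry, score
    if best is None:
        return None  # refuse rather than fabricate
    return {"answer": best["text"], "source": best["source"]}
```

A production system would use semantic retrieval and a generative model to summarize the matched passage, but the contract is the same: the response is either the trusted text verbatim or a summary of it, always paired with its source reference.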

Safeguarding sensitive information. Orbita works closely with clients when conversations via the virtual assistant might involve sensitive data, such as protected health information (PHI) or personally identifiable information (PII). As part of the guardrails we put in place, Orbita can leverage AI as a service while de-identifying the information that is shared, even when the solution is deployed in an authenticated, personalized experience.
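To make the de-identification idea concrete, here is a minimal sketch of a redaction pass that masks common PII patterns before a conversation transcript leaves the system for an external AI service. The patterns shown are illustrative assumptions only; real deployments rely on far more thorough methods (named-entity recognition, allow-lists, audit logging), not three regular expressions.

```python
import re

# Illustrative PII patterns mapped to placeholder tokens. Order matters:
# more specific patterns should run before broader ones.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def deidentify(text):
    """Replace text matching each PII pattern with its placeholder token,
    so the downstream AI service never sees the raw identifiers."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text
```

The key design point is where the pass runs: de-identification happens before any content is shared with the AI service, so the guardrail holds even in an authenticated, personalized session.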

These powerful winds of change have left healthcare leaders wondering where to turn and how to use generative AI responsibly. They are buffeted by messages – even from technology leaders like Elon Musk and Sam Altman of OpenAI itself – that we must proceed cautiously. But, at the same time, further development and advancement have not slowed.

Perhaps the wisest assessment, however, came from Rep. Jay Obernolte of California, who has a graduate degree in AI and wrote millions of lines of code as a computer programmer over a 30-year career. He wrote: “We must make room for the benefits of AI technology while putting guardrails around its misuse and mitigating its potential impacts. We will need to develop new legal frameworks to answer the questions surrounding intellectual property and the creative commons that arise with generative AI technology. We must also implement federal protections for personal data to guard against the misuse of AI in violating digital privacy and fostering the spread of misinformation…” (The Hill; May 16, 2023).

Want to learn more about how Orbita is using generative AI? Email us: hello@orbita.ai.

