The launch of ChatGPT has taken the world by storm. Workers across the country are already using the tool for everyday tasks, and its once seemingly magical ability to generate reams of text within seconds has already become normalised. Yet most companies, especially SMEs, do not have AI usage policies in place, with employees largely posing questions to the large language model (LLM) on an ad hoc basis. This poses fresh cyber security risks that business owners must contend with.

At the same time, we have barely scratched the surface of AI's potential to propel business productivity. The next phase of AI's development will see businesses integrate their proprietary data into LLMs, enabling them to generate radically more personalised results that are embedded into workers' everyday workflows. Selecting the right technology partner to integrate with ChatGPT allows business owners to kill two birds with one stone: reaping the promise of greater productivity gains while mitigating the risks.

The security risks of LLMs

Every business is sitting on a vast treasure trove of data that holds the keys to unlocking greater productivity. Every email, calendar appointment, internal document, and record in your CRM system provides clues as to how you do business and communicate with customers. ChatGPT may be trained on billions of web pages and more books and articles than every reader of this piece will get through in a lifetime, but the most valuable training data of all is the organisational data that most businesses have not yet tapped into.

Posing questions through the standard web version of ChatGPT will never fully harness a business's reservoir of organisational data. Worse, any attempt to input confidential internal information effectively leaks that data, largely unbeknownst to the employees doing it. Indeed, research by LayerX found that 15% of employees regularly paste company data into ChatGPT. Why does this pose a threat? By default, information fed into the consumer version of ChatGPT can be used to train future versions of the model, meaning it may ultimately shape the answers the model gives to other users.

Take a real-world example. Say that after finishing a proposal for a prospective client, one containing sensitive details about their business and financials, you ask ChatGPT: “How can I improve this proposal?” You have effectively leaked their data into the public sphere. Though it may not be readily viewable, this information could shape, and even appear in, other users' results if they ask the right questions.

There is also a problem with the accuracy of results. LLMs like ChatGPT inevitably absorb a mass of both correct and incorrect information, owing to the volume of unvetted content they are trained on. Much as people were too quick to trust Wikipedia entries in the early 2000s, workers may not realise when results are inaccurate, and this time the problem may be worse, because answers are presented in sentences often more coherent than our own thinking.

Benefits of choosing a technology partner

Accessing ChatGPT and other LLMs through a technology partner offers the best way to mitigate these cyber risks while also tapping into the true power of a business's proprietary data for productivity. One such partnership is the collaboration last year between Microsoft and OpenAI, which culminated in the launch of Microsoft Copilot this year. This enables AI-powered copilots to assist workers across Microsoft 365 applications, doing everything from instantly creating bespoke presentations with up-to-date company revenue forecasts to sending personalised messages to clients that drive engagement.

But crucially, a business's proprietary data is fully protected by the technology partner. Although it is used to generate personalised results, this sensitive data is ring-fenced and prevented from being fed back into ChatGPT's, or any other LLM's, public training data. This strikes an optimal balance between productivity and security.

Privacy and security are also protected within a company. It is estimated that the average employee should only have access to around 10% of their company's internal data. Technology partners ensure that these guardrails are upheld by enforcing existing access permissions and isolating each organisation's data, an arrangement known as 'tenant isolation'. This means, for instance, that personal details such as employment records or health history that are only meant to be shared with HR do not appear in search results.

Why SMEs first need to establish their privacy policies

Before working with a technology partner, however, businesses need to establish robust privacy and security policies, as these will set the guardrails that govern their interaction with the LLM. This is especially true for SMEs, which may not have detailed or prescriptive policies in place. And this cohort of businesses is already more exposed: insurer Hiscox estimates that one small business is hacked every 19 seconds.

This is not to say that AI is beyond an SME's organisational reach and only caters to larger corporates. SMEs may encounter challenges, such as incomplete cloud migration or inadequate data organisation, that hinder their ability to fully leverage AI capabilities. But the reality is that partnering with a technology expert can make it easier and safer for SMEs to overcome these barriers and securely integrate AI tools like ChatGPT into their operations, reducing costs overall.

It is exciting to see the pace at which the underlying algorithms that underpin large language models are improving, as well as the novel applications through which they can enhance our everyday lives. In the workplace, technology partnerships will be integral to this. SMEs should start exploring how they can leverage their internal data to maximise productivity and get on the front foot. Otherwise, it is likely that breaches have already occurred within their ranks, as employees are keen to test and discover how AI can help them work faster and smarter.