
How To Safely And Securely Deploy LLMs In Regulated Industries


AI has burst into the mainstream in recent years, and its potential effect on almost every industry is starting to be understood. As a result, AI adoption is soaring, with a recent MIT study reporting that 88% of companies are already using generative AI in some form.

However, as with any revolutionary new technology, there are concerns around the use and implementation of AI, particularly for companies in regulated industries. This is leading to hesitancy and doubt as to the best path to take on the road to AI adoption.

Issues such as copyright and misinformation have been well publicised, and the resulting unease is causing some teams to hold back their AI efforts, which could deal a serious blow to their competitiveness.

Although these concerns are understandable, they can be alleviated with the right processes and procedures in place. With robust data foundations and data intelligence, which in turn enable strong data governance, organisations can move forward with their AI implementation with confidence and ensure they reap its many benefits.

Regulated vs Unregulated

When it comes to AI innovation, great leaps have been made, but the technology is still in its infancy. Up to now, many advancements, certainly those in the mainstream, have come from industries that are traditionally ‘less regulated’.

The creative industry is one example, where we have seen many innovative tools released to the public: text prompts used to create images or music, and more recently Sora, an AI application that can generate 60-second videos from a simple text prompt.

Conversely, the picture looks very different for companies in heavily regulated industries, where there is significantly more reluctance to pursue rapid AI innovation.

These companies tend to be more worried about issues such as cyber security, but much of the hesitancy to innovate stems from uncertainty around regulation. Lawmakers across the world are still assessing the potential risks and dangers of AI development, and as such a clear, defined and universal approach to AI regulation is still lacking.

This all means that firms are reluctant to invest large amounts of time and resources into developing AI systems if they feel there is a chance that regulation will catch up with them and force them back to the drawing board.

Smaller models, smaller risk

One crucial way to alleviate these concerns and harness AI’s potential safely is to leverage smaller large language models (LLMs).

Larger models pose a risk to companies because they scrape vast amounts of data from across the web, which can lead to them ingesting irrelevant, poor-quality or otherwise protected data.

Not only can this generate bad results, but it can also expose an organisation to potential legal and security issues, a key consideration for organisations in highly regulated industries.

Smaller LLMs have a much narrower focus than larger models and are trained on a much smaller dataset built from an organisation’s own data. Additionally, by keeping models in-house, organisations don’t have to share any data with a third party. This drastically reduces risk and frees organisations to experiment with generative AI with the confidence that comes from knowing data governance is assured.
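For illustration, the kind of in-house training described above might look something like the following minimal sketch, built with the open-source Hugging Face Transformers library. The model name and the internal dataset path are assumptions for the example rather than anything prescribed here, and a real deployment would add evaluation, access controls and governance checks.

```python
# Minimal sketch: fine-tuning a small, open-weights LLM entirely in-house.
# The model name and dataset path are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any small open model

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # this tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Internal documents never leave the organisation's own infrastructure.
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="in_house_model",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives the standard causal language-modelling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("in_house_model")  # weights stay on local storage
```

Because both the training data and the resulting weights remain on local storage, nothing is shared with a third-party API at any point.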

Ensure security with solid foundations

To support the use of smaller models, organisations must establish a strong data culture and build data intelligence. It is now essential for all employees, regardless of their technical level, to understand how data is stored, how it is used and how they can make best use of it to aid their decision-making and innovation. A good way to achieve this is through a data intelligence platform, which provides an open, unified platform for all data and governance.

A data intelligence platform again leverages internal data, and users can submit requests in natural language to generate relevant responses. This democratises the use of AI throughout an organisation, again reducing reliance on third parties, and ensures that everyone, including non-technical staff, is well versed in the safe and responsible use of data. This provides another layer of security and governance, giving teams the confidence to innovate safely.
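To make the natural-language workflow concrete, here is a minimal sketch of such an interface over internal data. It uses a locally hosted open model to draft a SQL query against an in-house database; the model name, table schema and prompt are assumptions for the example, and any real system would validate the generated SQL before running it.

```python
# Minimal sketch: natural-language questions answered from internal data.
# The model, schema and prompt are illustrative assumptions.
import sqlite3
from transformers import pipeline

# A locally hosted model, so queries and data never reach a third party.
generator = pipeline("text-generation",
                     model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

SCHEMA = "CREATE TABLE sales (region TEXT, quarter TEXT, revenue REAL);"

def draft_sql(question: str) -> str:
    prompt = (f"Given this table:\n{SCHEMA}\n"
              f"Write a single SQLite query that answers: {question}\nSQL: ")
    result = generator(prompt, max_new_tokens=80, return_full_text=False)
    return result[0]["generated_text"].strip()

conn = sqlite3.connect("internal.db")
sql = draft_sql("What was total revenue per region in Q1?")
print(sql)  # a governance checkpoint: review the query before executing it
print(conn.execute(sql).fetchall())
```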

As AI continues to evolve, organisations will have to evolve with it. With uncertainty around regulation still looming large, companies must look inwards and focus on the things they can control.

By empowering their teams and establishing robust and secure data principles, organisations will put themselves in the best possible position to innovate securely and make the most of AI’s transformative potential.

Russ Rawlings is RVP, Enterprise, UK&I, Databricks
