
https://digitaltrade.blog.gov.uk/2024/03/13/ensuring-the-safe-and-effective-use-of-ai-in-the-department-for-business-and-trade/

Ensuring the safe and effective use of AI in the Department for Business and Trade

Posted by: Sarah Livermore, Posted on: 13 March 2024 - Categories: Products

 

[Image: A close up of computer circuitry with binary code displayed on an embedded microchip]

Sarah Livermore

Here at the Department for Business and Trade (DBT) we have a wealth of experience in developing ‘AI-powered’ digital products and services. This involves long-established machine learning techniques, such as statistical models used to identify patterns in data.

These models underpin some of our existing work, such as identifying the businesses that could most benefit from DBT’s export advice based on how well established they are.
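
As a purely illustrative sketch of what such a pattern-spotting model might look like (the features, toy data and model choice below are assumptions for illustration, not DBT’s actual system):

```python
# Hypothetical sketch: a simple statistical model that scores businesses
# on how likely they are to benefit from export advice. The features
# (years trading, employee count, prior exporting) are illustrative
# assumptions, not DBT's real inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [years_trading, employee_count, has_exported_before]
X = np.array([
    [1, 5, 0],
    [12, 80, 1],
    [3, 10, 0],
    [20, 250, 1],
    [7, 40, 1],
    [2, 8, 0],
])
# 1 = business went on to benefit from export advice, 0 = did not
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new, fairly well-established business
new_business = np.array([[10, 60, 1]])
print(model.predict_proba(new_business)[0, 1])  # probability of benefit
```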

A new era for AI 

The technical expertise required to build such models means that AI use has previously been led by data science teams. However, the rapid growth in popularity of online generative AI tools during the past year means that some form of AI is now available to many more people. 

For humans, the process of getting our thoughts down ‘on paper’ is neurologically hugely complex; indeed, it is not yet possible to build a model that accurately represents how we do it. Instead, current language models are built by training statistical models on trillions of words taken from the world wide web. To represent the richness of language, the resulting models are some of the most complex ever developed, containing billions of parameters. For comparison, models used to process images typically have millions rather than billions of parameters.
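
To get a feel for where those billions come from, consider just one component of a large language model, its token-embedding table; the vocabulary size and embedding width below are illustrative assumptions, broadly in line with publicly documented models:

```python
# Rough parameter count for a single component of a large language model.
# Both figures are illustrative assumptions, not any specific model's specs.
vocab_size = 50_000      # number of distinct tokens the model knows
embedding_dim = 12_288   # width of each token's vector representation

embedding_params = vocab_size * embedding_dim
print(f"{embedding_params:,} parameters")  # 614,400,000: over half a
# billion parameters before counting any of the model's actual layers
```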

The computational power required to train these models is vast: it would take a single computer hundreds of years to train the most popular large language models (LLMs). Huge numbers of computers are therefore pooled together so that the training can be done in a reasonable amount of time. Consequently, only those organisations with access to such resources are currently able to develop these models. This means that the most popular models tend to be developed by ‘big tech’ companies, although some smaller start-ups and academic collaborations have also developed models.
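
A rough back-of-envelope calculation illustrates the scale; both figures below are assumptions for illustration (total training compute in the ballpark of published estimates for GPT-3-scale models, and throughput roughly what a single modern machine sustains):

```python
# Back-of-envelope estimate of single-machine LLM training time.
# Both figures are illustrative assumptions, not measured values.
total_flops = 3.14e23      # total floating-point operations to train
flops_per_second = 2e13    # sustained throughput of one machine

years = total_flops / flops_per_second / (60 * 60 * 24 * 365)
print(f"{years:.0f} years on one machine")                  # ~498 years
print(f"{years / 1000 * 12:.1f} months on 1,000 machines")  # ~6 months
```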

Therefore, if someone wanted to use AI to help write a tricky email, or even a blog post, it is not their data science team to whom they would turn. They are more likely to use an online service hosted outside of their organisation.

This opens up the possibility of automatically generating emails, reports and even entire books or movie scripts. However, it also poses challenges around the governance of how that work is done and where the associated data goes: for example, how do we explain exactly who created the writing, and who would be accountable for it?

Ensuring safe and responsible use of AI 

To that end, at DBT we have recently established a process to ensure that we develop and use AI safely, maintaining public trust in our services and in how we handle people’s data. This governance framework covers the development of our in-house mathematical modelling, plus any use of third-party tools, including generative AI.

Right at the start of developing this process, we recognised how important it was to involve external experts. We opted for a voluntary review by the Government Internal Audit Agency to assess the robustness of DBT’s AI adoption strategy. Their recommendations, such as developing a clear approval process for the use of AI, established the foundations for our AI adoption framework.

We then commissioned experts from the Alan Turing Institute’s (ATI) Public Policy Programme to advise us on the ethical use of AI. The ATI have many years of experience advising the public sector on the safe design, development and deployment of AI-powered systems. They tailored their well-established Process Based Governance framework to show how it applies to DBT. The report for DBT is available online so that others in the public sector, and beyond, can benefit.

During the framework’s development, we also worked closely with our own experts such as our cyber security and data protection teams. This was important to ensure that the AI governance fits with existing governance processes and does not cause needless duplication.  

Establishing our framework 

Our AI governance framework has now been in place since the start of the year. Colleagues wanting to develop or use AI must provide assurances such as: 

  • ethical issues have been adequately considered, for example whether certain groups in society could be disadvantaged by the use of the AI. 
  • the work has a sound legal basis, for instance in relation to data protection law.  

We have built this framework to sit on top of our existing Information Risk Assurance Process (IRAP) which covers data protection and cyber security. This means that we maintain expert assurance in those critical areas, whilst covering new risks related to the use of AI.   

Our experts from across these teams are also learning from each other where issues overlap. For example, our data protection and AI experts come together to assess the risks of AI generating inaccurate information about someone, which is an important consideration in UK data protection law.

We plan to use the framework around 50 times before reviewing it and iterating. We expect that in that time we will have learnt a lot about how well it suits DBT, and that the market will have changed to address some of the more common concerns around accuracy, data privacy and copyright infringement.

We recognise that, as a government department, we must meet a very high standard for trustworthiness by being fair, accurate and impartial, and it is really important that we apply those values to any AI-driven systems we build. We engaged extensively with senior leaders across the department, including our Executive Committee and Chief Scientific Advisor, which gave us confidence in our approach. We will report back to them and share our findings here as work continues.

 


3 comments

  1. Comment by David H. Deans posted on

    Sarah, based on my research and work as an advisory consultant, I've uncovered a common leadership challenge associated with the effective deployment and adoption of Generative AI apps for digital business growth.

    The emergence of GenAI demand across the globe marks a new chapter in human-machine collaboration. Amidst the burgeoning capabilities of these powerful digital tools, a unique breed of individuals is poised to become the linchpin of deployment success: the "GenAI Polymath."

    That said, finding practitioners with business domain knowledge *and* AI technical skills is very difficult, due to the small available talent puddle. Are you, or another team member, researching this emerging topic in support of DBT? Have you determined the need for upskilling plans to create more multifaceted GenAI Polymaths?

  2. Comment by Zane Roett posted on

    Hi Sarah,

    I would be interested to understand how your AI framework might differ from, or build upon, the recent Gov.Uk GenAI Framework. Would your framework be fully shared with wider Govt. organisations if it expands upon parts of the Gov.Uk framework, please?

    Kind regards,
    Zane Roett (Dept. for Education)

    • Replies to Zane Roett

      Comment by Zane Roett posted on

      P.s., I have just realised that the 174-page Turing Institute document is the actual DBT "framework for assessing how we decide the risks and benefits of adopting AI technologies".

      Kind Regards,
      Zane Roett